From 4c295796bc93314abbef23be12fbc8d95fd8791b Mon Sep 17 00:00:00 2001
From: Sanyam Bhutani
Date: Thu, 21 Nov 2024 11:21:28 -0800
Subject: [PATCH] Update quickstart.md

---
 zero_to_hero_guide/quickstart.md | 47 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 46 insertions(+), 1 deletion(-)

diff --git a/zero_to_hero_guide/quickstart.md b/zero_to_hero_guide/quickstart.md
index f8374a068..c18d0ff03 100644
--- a/zero_to_hero_guide/quickstart.md
+++ b/zero_to_hero_guide/quickstart.md
@@ -1,3 +1,23 @@
+# Quickstart Guide
+
+Llama Stack lets you configure a distribution from a variety of providers, so you can focus on going from zero to production quickly.
+
+This guide will walk you through building a local distribution, using Ollama as the inference provider.
+
+We also have a set of notebooks that walk you through the Llama Stack APIs:
+
+- Inference
+- Prompt Engineering
+- Chatting with Images
+- Tool Calling
+- Memory API for RAG
+- Safety API
+- Agentic API
+
+Below, we will learn how to get started with Ollama as an inference provider. Note that the steps for configuring your provider will vary slightly depending on the service; the user experience, however, stays the same. This is the power of Llama Stack.
+
+Prototype locally using Ollama, then deploy to the cloud with your favorite provider or your own deployment. Use any API from any provider while focusing on development.
+
 # Ollama Quickstart Guide
 
 This guide will walk you through setting up an end-to-end workflow with Llama Stack with ollama, enabling you to perform text generation using the `Llama3.2-1B-Instruct` model. Follow these steps to get started quickly.
@@ -82,6 +102,13 @@ If you're looking for more specific topics like tool calling or agent setup, we
    llama stack build --template ollama --image-type conda
    ```
 
+After this step, you will see console output like the following (example values for these environment variables are sketched in the appendix at the end of this guide):
+```
+Build Successful! Next steps:
+   1. Set the environment variables: LLAMASTACK_PORT, OLLAMA_URL, INFERENCE_MODEL, SAFETY_MODEL
+   2. `llama stack run /Users/username/.llama/distributions/llamastack-ollama/ollama-run.yaml`
+```
+
 2. **Edit Configuration**:
    - Modify the `ollama-run.yaml` file located at `/Users/yourusername/.llama/distributions/llamastack-ollama/ollama-run.yaml`:
    - Change the `chromadb` port to `8000`.
@@ -214,4 +241,22 @@ This command initializes the model to interact with your local Llama Stack insta
 
 **Explore Example Apps**: Check out [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) for example applications built using Llama Stack.
 
----
\ No newline at end of file
+---
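+
+## Appendix: Example Environment Setup
+
+The build output above asks you to set `LLAMASTACK_PORT`, `OLLAMA_URL`, `INFERENCE_MODEL`, and `SAFETY_MODEL` before running the server. The sketch below shows one way to do this; the port and model names are example values rather than fixed requirements:
+
+```bash
+export LLAMASTACK_PORT=5000                  # example port for the Llama Stack server
+export OLLAMA_URL=http://localhost:11434     # Ollama's default local endpoint
+export INFERENCE_MODEL=Llama3.2-1B-Instruct  # the model used throughout this guide
+export SAFETY_MODEL=Llama-Guard-3-1B         # example safety model; swap in your choice
+
+# Pre-flight check: Ollama's root endpoint replies "Ollama is running".
+curl $OLLAMA_URL
+
+# Start the server with the config printed by the build step
+# (replace `username` with your own user name):
+llama stack run /Users/username/.llama/distributions/llamastack-ollama/ollama-run.yaml
+```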