diff --git a/zero_to_hero_guide/quickstart.md b/zero_to_hero_guide/quickstart.md
index 5245220b9..f8374a068 100644
--- a/zero_to_hero_guide/quickstart.md
+++ b/zero_to_hero_guide/quickstart.md
@@ -38,10 +38,6 @@ If you're looking for more specific topics like tool calling or agent setup, we
 ```
 **Note**: The supported models for llama stack for now is listed in [here](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/inference/ollama/ollama.py#L43)
-1. **Download Ollama App**:
-  - Go to [https://ollama.com/download](https://ollama.com/download).
-  - Download and unzip `Ollama-darwin.zip`.
-  - Run the `Ollama` application.
 
 
 ---
 
@@ -111,7 +107,7 @@ After setting up the server, open a new terminal window and verify it's working
 curl http://localhost:5050/inference/chat_completion \
 -H "Content-Type: application/json" \
 -d '{
-    "model": "llama3.2:1b",
+    "model": "Llama3.2-3B-Instruct",
     "messages": [
         {"role": "system", "content": "You are a helpful assistant."},
         {"role": "user", "content": "Write me a 2-sentence poem about the moon"}
@@ -218,4 +214,4 @@ This command initializes the model to interact with your local Llama Stack insta
 
 **Explore Example Apps**: Check out [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) for example applications built using Llama Stack.
 
----
+---
\ No newline at end of file