diff --git a/docs/zero_to_hero_guide/README.md b/docs/zero_to_hero_guide/README.md
index 4a09a7e62..941e1f965 100644
--- a/docs/zero_to_hero_guide/README.md
+++ b/docs/zero_to_hero_guide/README.md
@@ -45,7 +45,7 @@ If you're looking for more specific topics, we have a [Zero to Hero Guide](#next
 
 ---
 
-## Install Dependencies and Set Up Environmen
+## Install Dependencies and Set Up Environment
 
 1. **Create a Conda Environment**:
    Create a new Conda environment with Python 3.10:
@@ -110,7 +110,7 @@ If you're looking for more specific topics, we have a [Zero to Hero Guide](#next
    --env SAFETY_MODEL=$SAFETY_MODEL
    --env OLLAMA_URL=$OLLAMA_URL
    ```
-   Note: Everytime you run a new model with `ollama run`, you will need to restart the llama stack. Otherwise it won't see the new model.
+   Note: Every time you run a new model with `ollama run`, you will need to restart the llama stack. Otherwise it won't see the new model.
 
 The server will start and listen on `http://localhost:5001`.
 
@@ -191,7 +191,7 @@ You can check the available models with the command `llama-stack-client models l
 
 You can also interact with the Llama Stack server using a simple Python script. Below is an example:
 
-### 1. Activate Conda Environmen
+### 1. Activate Conda Environment
 
 ```bash
 conda activate ollama