From 2fc1c16d5864a3a0a82b0e1d5048465dfb74f12c Mon Sep 17 00:00:00 2001
From: Jeffrey Lind <124309394+JeffreyLind3@users.noreply.github.com>
Date: Fri, 29 Nov 2024 11:12:53 -0500
Subject: [PATCH] Fix Zero to Hero README.md Formatting (#546)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
# What does this PR do?
This PR fixes the formatting of a code block in the Zero to Hero README.md; the change is shown in the before/after screenshots below.
**Before**
**After**
---
docs/zero_to_hero_guide/README.md | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/docs/zero_to_hero_guide/README.md b/docs/zero_to_hero_guide/README.md
index 09a4a6d50..5490f767f 100644
--- a/docs/zero_to_hero_guide/README.md
+++ b/docs/zero_to_hero_guide/README.md
@@ -120,13 +120,13 @@ export SAFETY_MODEL="meta-llama/Llama-Guard-3-1B"
3. **Run the Llama Stack**:
- Run the stack with command shared by the API from earlier:
- ```bash
- llama stack run ollama \
- --port $LLAMA_STACK_PORT \
- --env INFERENCE_MODEL=$INFERENCE_MODEL \
- --env SAFETY_MODEL=$SAFETY_MODEL \
- --env OLLAMA_URL=http://localhost:11434
- ```
+ ```bash
+ llama stack run ollama \
+ --port $LLAMA_STACK_PORT \
+ --env INFERENCE_MODEL=$INFERENCE_MODEL \
+ --env SAFETY_MODEL=$SAFETY_MODEL \
+ --env OLLAMA_URL=http://localhost:11434
+ ```
Note: Everytime you run a new model with `ollama run`, you will need to restart the llama stack. Otherwise it won't see the new model