From 51492bd9b6d0f7342677b29c53629dd23d53b027 Mon Sep 17 00:00:00 2001
From: Aidan Reilly <74046732+aireilly@users.noreply.github.com>
Date: Sat, 12 Apr 2025 00:26:17 +0100
Subject: [PATCH] docs: Update docs and fix warning in start_stack.sh (#1937)

Small docs update plus a fix for `start_stack.sh`: add a missing color
variable and correct the `if` statement logic.

# What does this PR do?

1. Makes a small change to `start_stack.sh` to resolve the following
   error (a standalone sketch of the corrected test pattern follows the
   diff below):

   ```cmd
   /home/aireilly/.local/lib/python3.13/site-packages/llama_stack/distribution/start_stack.sh: line 76: [: missing `]'
   ```

2. Adds the missing `$GREEN` color to `start_stack.sh`.

3. Updates `docs/source/getting_started/detailed_tutorial.md` with some
   small changes and corrections.

## Test Plan

The procedures described in
`docs/source/getting_started/detailed_tutorial.md` were verified on
Fedora Linux 41.
---
 docs/source/getting_started/detailed_tutorial.md | 6 +++---
 llama_stack/distribution/start_stack.sh          | 3 ++-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/docs/source/getting_started/detailed_tutorial.md b/docs/source/getting_started/detailed_tutorial.md
index 610c0cad5..a1504f249 100644
--- a/docs/source/getting_started/detailed_tutorial.md
+++ b/docs/source/getting_started/detailed_tutorial.md
@@ -69,7 +69,7 @@ which defines the providers and their settings.
 Now let's build and run the Llama Stack config for Ollama.
 
 ```bash
-INFERENCE_MODEL=llama3.2:3b llama stack build --template ollama --image-type conda --run
+INFERENCE_MODEL=llama3.2:3b llama stack build --template ollama --image-type conda --image-name llama3-3b-conda --run
 ```
 :::
 :::{tab-item} Using a Container
@@ -77,10 +77,9 @@ You can use a container image to run the Llama Stack server. We provide several
 component that works with different inference providers out of
 the box. For this guide, we will use `llamastack/distribution-ollama` as the container image. If you'd like to build
 your own image or customize the configurations, please check out [this guide](../references/index.md).
-
 First lets setup some environment variables and create a local directory to mount into the container’s file system.
 ```bash
-export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
+export INFERENCE_MODEL="llama3.2:3b"
 export LLAMA_STACK_PORT=8321
 mkdir -p ~/.llama
 ```
@@ -223,6 +222,7 @@ Other SDKs are also available, please refer to the [Client SDK](../index.md#clie
 Now you can run inference using the Llama Stack client SDK.
 
 ### i. Create the Script
+
 Create a file `inference.py` and add the following code:
 ```python
 from llama_stack_client import LlamaStackClient
diff --git a/llama_stack/distribution/start_stack.sh b/llama_stack/distribution/start_stack.sh
index 964fcfaf7..d3e13c7dc 100755
--- a/llama_stack/distribution/start_stack.sh
+++ b/llama_stack/distribution/start_stack.sh
@@ -18,6 +18,7 @@ VIRTUAL_ENV=${VIRTUAL_ENV:-}
 set -euo pipefail
 
 RED='\033[0;31m'
+GREEN='\033[0;32m'
 NC='\033[0m' # No Color
 
 error_handler() {
@@ -73,7 +74,7 @@ done
 PYTHON_BINARY="python"
 case "$env_type" in
   "venv")
-    if [ -n "$VIRTUAL_ENV" && "$VIRTUAL_ENV" == "$env_path_or_name" ]; then
+    if [ -n "$VIRTUAL_ENV" ] && [ "$VIRTUAL_ENV" == "$env_path_or_name" ]; then
       echo -e "${GREEN}Virtual environment already activated${NC}" >&2
     else
       # Activate virtual environment
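
For reference, the `[` builtin does not accept `&&` inside a single bracketed test, which is why the original `if` line triggered the error shown in item 1 above; the fix runs two separate `[ ... ]` tests chained by the shell's `&&` operator. The following bash sketch only illustrates that pattern; the variable values are made up for the demo and are not taken from the running script:

```bash
#!/usr/bin/env bash
# Standalone illustration of the test fix made in start_stack.sh.
# The values below are hypothetical, chosen only for this demo.
VIRTUAL_ENV="/home/user/.venvs/llama"
env_path_or_name="/home/user/.venvs/llama"

# Broken form: `&&` is not an operator inside a single `[ ... ]` test,
# so bash aborts the test with "[: missing `]'" at runtime:
# if [ -n "$VIRTUAL_ENV" && "$VIRTUAL_ENV" == "$env_path_or_name" ]; then

# Fixed form: two separate tests joined by the shell's && operator.
if [ -n "$VIRTUAL_ENV" ] && [ "$VIRTUAL_ENV" == "$env_path_or_name" ]; then
    echo "Virtual environment already activated"
else
    echo "Activating virtual environment"
fi
```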