docs: Update docs and fix warning in start-stack.sh (#1937)

Small docs update, plus a fix for `start_stack.sh` that adds a missing color variable and corrects broken if-statement logic.

# What does this PR do?
1. Makes a small change to `start_stack.sh` to resolve the following error (a short sketch of why it happens follows this list):
```cmd
/home/aireilly/.local/lib/python3.13/site-packages/llama_stack/distribution/start_stack.sh: line 76: [: missing ]'
```
2. Adds a missing `$GREEN` colour to `start_stack.sh`.
3. Updates `docs/source/getting_started/detailed_tutorial.md` with some small changes and corrections.
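
In bash, `&&` is not valid inside a single `[ ... ]` test; the shell splits the command at `&&`, which leaves the first `[` without its closing `]` and produces exactly the `[: missing ]'` message above. The fix splits the condition into two `[ ... ]` tests. A minimal sketch of the broken and corrected forms, reusing the variable names from `start_stack.sh` (the values below are made up for illustration):

```bash
#!/usr/bin/env bash
# Hypothetical values, purely for illustration.
VIRTUAL_ENV="$HOME/.venvs/stack"
env_path_or_name="$HOME/.venvs/stack"

# Broken: `&&` is a shell control operator, so `[` only ever sees `-n "$VIRTUAL_ENV"`
# with no closing bracket, and bash reports: [: missing `]'
#   if [ -n "$VIRTUAL_ENV" && "$VIRTUAL_ENV" == "$env_path_or_name" ]; then

# Fixed: two separate tests joined by the shell's && operator.
if [ -n "$VIRTUAL_ENV" ] && [ "$VIRTUAL_ENV" == "$env_path_or_name" ]; then
  echo "Virtual environment already activated"
fi
```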

## Test Plan
The procedures described in `docs/source/getting_started/detailed_tutorial.md` were verified on Fedora Linux 41.
Aidan Reilly 2025-04-12 00:26:17 +01:00 committed by GitHub
parent ed58a94b30
commit 51492bd9b6
2 changed files with 5 additions and 4 deletions

docs/source/getting_started/detailed_tutorial.md

@@ -69,7 +69,7 @@ which defines the providers and their settings.
Now let's build and run the Llama Stack config for Ollama.
```bash
-INFERENCE_MODEL=llama3.2:3b llama stack build --template ollama --image-type conda --run
+INFERENCE_MODEL=llama3.2:3b llama stack build --template ollama --image-type conda --image-name llama3-3b-conda --run
```
:::
:::{tab-item} Using a Container
@@ -77,10 +77,9 @@ You can use a container image to run the Llama Stack server. We provide several
component that works with different inference providers out of the box. For this guide, we will use
`llamastack/distribution-ollama` as the container image. If you'd like to build your own image or customize the
configurations, please check out [this guide](../references/index.md).
First lets setup some environment variables and create a local directory to mount into the containers file system.
```bash
-export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
+export INFERENCE_MODEL="llama3.2:3b"
export LLAMA_STACK_PORT=8321
mkdir -p ~/.llama
```
@@ -223,6 +222,7 @@ Other SDKs are also available, please refer to the [Client SDK](../index.md#clie
Now you can run inference using the Llama Stack client SDK.
### i. Create the Script
Create a file `inference.py` and add the following code:
```python
from llama_stack_client import LlamaStackClient

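For readers skimming the container section above: the exported variables and the `~/.llama` directory are what eventually get handed to the container. A hedged sketch of how they might be used to start the server (flag names and the mount target follow common Llama Stack examples and may differ from the exact command in the tutorial, which also wires up the Ollama URL):

```bash
# Assumes INFERENCE_MODEL and LLAMA_STACK_PORT are exported as shown above.
docker run -it \
  -p "$LLAMA_STACK_PORT:$LLAMA_STACK_PORT" \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-ollama \
  --port "$LLAMA_STACK_PORT" \
  --env INFERENCE_MODEL="$INFERENCE_MODEL"
```
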
llama_stack/distribution/start_stack.sh

@@ -18,6 +18,7 @@ VIRTUAL_ENV=${VIRTUAL_ENV:-}
set -euo pipefail
RED='\033[0;31m'
+GREEN='\033[0;32m'
NC='\033[0m' # No Color
error_handler() {
@@ -73,7 +74,7 @@ done
PYTHON_BINARY="python"
case "$env_type" in
"venv")
-if [ -n "$VIRTUAL_ENV" && "$VIRTUAL_ENV" == "$env_path_or_name" ]; then
+if [ -n "$VIRTUAL_ENV" ] && [ "$VIRTUAL_ENV" == "$env_path_or_name" ]; then
echo -e "${GREEN}Virtual environment already activated${NC}" >&2
else
# Activate virtual environment
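
With `GREEN` now defined next to `RED` and `NC`, the activation message on the already-activated branch has a colour code to expand; under the script's `set -euo pipefail`, referencing an undefined `${GREEN}` would otherwise trip the `-u` (unbound variable) check. A small standalone sketch of how these ANSI escapes behave (the second message string is illustrative):

```bash
#!/usr/bin/env bash
# ANSI colour escapes, as defined in start_stack.sh.
RED='\033[0;31m'
GREEN='\033[0;32m'
NC='\033[0m' # No Color

# `echo -e` interprets the \033 escapes; ${NC} resets the terminal colour afterwards.
echo -e "${GREEN}Virtual environment already activated${NC}" >&2
echo -e "${RED}Example error message${NC}" >&2
```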