More documentation fixes

Ashwin Bharambe 2024-11-18 17:06:13 -08:00
parent e40404625b
commit 939056e265
2 changed files with 14 additions and 10 deletions


@@ -54,7 +54,7 @@ Now you are ready to run Llama Stack with Ollama as the inference provider. You
 This method allows you to get started quickly without having to build the distribution code.
 ```bash
-LLAMA_STACK_PORT=5001
+export LLAMA_STACK_PORT=5001
 docker run \
   -it \
   -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
@@ -90,21 +90,23 @@ docker run \
 Make sure you have done `pip install llama-stack` and have the Llama Stack CLI available.
 ```bash
+export LLAMA_STACK_PORT=5001
 llama stack build --template ollama --image-type conda
 llama stack run ./run.yaml \
-  --port 5001 \
+  --port $LLAMA_STACK_PORT \
   --env INFERENCE_MODEL=$INFERENCE_MODEL \
-  --env OLLAMA_URL=http://127.0.0.1:11434
+  --env OLLAMA_URL=http://localhost:11434
 ```
 If you are using Llama Stack Safety / Shield APIs, use:
 ```bash
 llama stack run ./run-with-safety.yaml \
-  --port 5001 \
+  --port $LLAMA_STACK_PORT \
   --env INFERENCE_MODEL=$INFERENCE_MODEL \
   --env SAFETY_MODEL=$SAFETY_MODEL \
-  --env OLLAMA_URL=http://127.0.0.1:11434
+  --env OLLAMA_URL=http://localhost:11434
 ```
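The switch from a plain assignment to `export LLAMA_STACK_PORT=5001` (made identically in both files) matters when the port has to reach child processes rather than just the current shell's own `$LLAMA_STACK_PORT` expansions. A minimal sketch of the difference, assuming a POSIX-compatible shell; the `sh -c` probes are illustrative only and not part of the documented setup:

```bash
# Start from a clean slate so the demonstration is not affected by an
# already-exported LLAMA_STACK_PORT in the surrounding environment.
unset LLAMA_STACK_PORT

# Plain assignment: the variable exists only inside the current shell.
LLAMA_STACK_PORT=5001
sh -c 'echo "child sees: ${LLAMA_STACK_PORT:-<unset>}"'   # prints: child sees: <unset>

# Exported assignment: child processes inherit the value via their environment.
export LLAMA_STACK_PORT=5001
sh -c 'echo "child sees: ${LLAMA_STACK_PORT:-<unset>}"'   # prints: child sees: 5001
```

Either form is enough for same-shell expansions such as `-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT`, since the invoking shell substitutes the value before `docker run` ever sees it; `export` additionally makes the port visible to anything that reads it from the environment, which is presumably why the docs standardize on it.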


@@ -50,7 +50,7 @@ Now you are ready to run Llama Stack with Ollama as the inference provider. You
 This method allows you to get started quickly without having to build the distribution code.
 ```bash
-LLAMA_STACK_PORT=5001
+export LLAMA_STACK_PORT=5001
 docker run \
   -it \
   -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
@@ -86,21 +86,23 @@ docker run \
 Make sure you have done `pip install llama-stack` and have the Llama Stack CLI available.
 ```bash
+export LLAMA_STACK_PORT=5001
 llama stack build --template ollama --image-type conda
 llama stack run ./run.yaml \
-  --port 5001 \
+  --port $LLAMA_STACK_PORT \
   --env INFERENCE_MODEL=$INFERENCE_MODEL \
-  --env OLLAMA_URL=http://127.0.0.1:11434
+  --env OLLAMA_URL=http://localhost:11434
 ```
 If you are using Llama Stack Safety / Shield APIs, use:
 ```bash
 llama stack run ./run-with-safety.yaml \
-  --port 5001 \
+  --port $LLAMA_STACK_PORT \
   --env INFERENCE_MODEL=$INFERENCE_MODEL \
   --env SAFETY_MODEL=$SAFETY_MODEL \
-  --env OLLAMA_URL=http://127.0.0.1:11434
+  --env OLLAMA_URL=http://localhost:11434
 ```
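Taken together, the conda workflow documented in both files reads roughly as below after this commit. The model identifiers are placeholders chosen for illustration, not values taken from this diff; substitute whatever models your Ollama server actually hosts.

```bash
# One exported port, reused by every subsequent command.
export LLAMA_STACK_PORT=5001

# Placeholder model identifiers -- assumptions for this sketch, not part of
# the diff; set them to the models you are serving through Ollama.
export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
export SAFETY_MODEL="meta-llama/Llama-Guard-3-1B"

# Build the ollama template into a conda environment, then start the stack.
llama stack build --template ollama --image-type conda
llama stack run ./run.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env OLLAMA_URL=http://localhost:11434

# Alternative: with the Safety / Shield APIs, start from the safety config
# instead of run.yaml (the previous command blocks, so run one or the other).
llama stack run ./run-with-safety.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env SAFETY_MODEL=$SAFETY_MODEL \
  --env OLLAMA_URL=http://localhost:11434
```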