docs: update docs to use llama stack list-deps.

Signed-off-by: Charlie Doern <cdoern@redhat.com>
Charlie Doern 2025-10-09 15:34:42 -04:00
parent 62719f1b8a
commit 7d4530a4c4
7 changed files with 33 additions and 14 deletions


@@ -158,9 +158,9 @@ under the LICENSE file in the root directory of this source tree.
 Some tips about common tasks you work on while contributing to Llama Stack:
-### Using `llama stack build`
+### Installing dependencies of distributions
-Building a stack image will use the production version of the `llama-stack` and `llama-stack-client` packages. If you are developing with a llama-stack repository checked out and need your code to be reflected in the stack image, set `LLAMA_STACK_DIR` and `LLAMA_STACK_CLIENT_DIR` to the appropriate checked out directories when running any of the `llama` CLI commands.
+When installing dependencies for a distribution, you can use `llama stack list-deps` to view and install the required packages.
 Example:
 ```bash
@@ -168,7 +168,12 @@ cd work/
 git clone https://github.com/llamastack/llama-stack.git
 git clone https://github.com/llamastack/llama-stack-client-python.git
 cd llama-stack
-LLAMA_STACK_DIR=$(pwd) LLAMA_STACK_CLIENT_DIR=../llama-stack-client-python llama stack build --distro <...>
+# Show dependencies for a distribution
+llama stack list-deps <distro-name>
+# Install dependencies
+llama stack list-deps <distro-name> | xargs -L1 uv pip install
 ```
 ### Updating distribution configurations
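A note on the pipeline introduced above: `llama stack list-deps` emits one requirement per line, which is why it composes with `xargs -L1`. A minimal sketch of running it against a fresh virtual environment, assuming `uv` is installed and using `starter` as a stand-in for `<distro-name>`:

```bash
# Create and activate an isolated environment (assumes uv is installed)
uv venv .venv
source .venv/bin/activate

# One dependency per output line, so xargs -L1 triggers one install per package
llama stack list-deps starter | xargs -L1 uv pip install
```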


@@ -27,8 +27,11 @@ MODEL="Llama-4-Scout-17B-16E-Instruct"
 # get meta url from llama.com
 llama model download --source meta --model-id $MODEL --meta-url <META_URL>
+# install dependencies for the distribution
+llama stack list-deps meta-reference-gpu | xargs -L1 uv pip install
 # start a llama stack server
-INFERENCE_MODEL=meta-llama/$MODEL llama stack build --run --template meta-reference-gpu
+INFERENCE_MODEL=meta-llama/$MODEL llama stack run meta-reference-gpu
 # install client to interact with the server
 pip install llama-stack-client
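Once the server above is running, one way to check that the client can reach it is the `llama-stack-client` CLI; this sketch assumes the server listens on the default port 8321:

```bash
# Point the client at the local server (8321 assumed as the default port)
llama-stack-client configure --endpoint http://localhost:8321
# List the models the server exposes
llama-stack-client models list
```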


@@ -158,7 +158,7 @@ under the LICENSE file in the root directory of this source tree.
 Some tips about common tasks you work on while contributing to Llama Stack:
-### Using `llama stack build`
+### Installing dependencies of distributions
 Building a stack image will use the production version of the `llama-stack` and `llama-stack-client` packages. If you are developing with a llama-stack repository checked out and need your code to be reflected in the stack image, set `LLAMA_STACK_DIR` and `LLAMA_STACK_CLIENT_DIR` to the appropriate checked out directories when running any of the `llama` CLI commands.
@@ -168,7 +168,7 @@ cd work/
 git clone https://github.com/meta-llama/llama-stack.git
 git clone https://github.com/meta-llama/llama-stack-client-python.git
 cd llama-stack
-LLAMA_STACK_DIR=$(pwd) LLAMA_STACK_CLIENT_DIR=../llama-stack-client-python llama stack build --distro <...>
+llama stack build --distro <...>
 ```
 ### Updating distribution configurations
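The `LLAMA_STACK_DIR`/`LLAMA_STACK_CLIENT_DIR` variables in the removed line point the CLI at local checkouts. As an aside not prescribed by this diff, editable installs are another way to have local changes picked up, assuming you are inside the `work/` directory from the example:

```bash
# Editable installs: changes in either checkout take effect without reinstalling
uv pip install -e ./llama-stack
uv pip install -e ./llama-stack-client-python
```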


@@ -169,7 +169,11 @@ docker run \
 Ensure you have configured the starter distribution using the environment variables explained above.
 ```bash
-uv run --with llama-stack llama stack build --distro starter --image-type venv --run
+# Install dependencies for the starter distribution
+uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install
+# Run the server
+uv run --with llama-stack llama stack run starter
 ```
 ## Example Usage
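After `llama stack run starter` comes up, a quick smoke test is to hit the server's health endpoint; this sketch assumes the default listen port of 8321:

```bash
# Returns a small JSON status payload when the server is healthy
curl http://localhost:8321/v1/health
```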


@@ -58,15 +58,19 @@ Llama Stack is a server that exposes multiple APIs, you connect with it using th
 <Tabs>
 <TabItem value="venv" label="Using venv">
-You can use Python to build and run the Llama Stack server, which is useful for testing and development.
+You can use Python to install dependencies and run the Llama Stack server, which is useful for testing and development.
 Llama Stack uses a [YAML configuration file](../distributions/configuration) to specify the stack setup,
 which defines the providers and their settings. The generated configuration serves as a starting point that you can [customize for your specific needs](../distributions/customizing_run_yaml).
-Now let's build and run the Llama Stack config for Ollama.
+Now let's install dependencies and run the Llama Stack config for Ollama.
 We use `starter` as the template. By default all providers are disabled, so you need to enable Ollama by passing environment variables.
 ```bash
-llama stack build --distro starter --image-type venv --run
+# Install dependencies for the starter distribution
+uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install
+# Run the server
+llama stack run starter
 ```
 </TabItem>
 <TabItem value="container" label="Using a Container">


@@ -24,10 +24,13 @@ ollama run llama3.2:3b --keepalive 60m
 #### Step 2: Run the Llama Stack server
-We will use `uv` to run the Llama Stack server.
+We will use `uv` to install dependencies and run the Llama Stack server.
 ```bash
-OLLAMA_URL=http://localhost:11434 \
-uv run --with llama-stack llama stack build --distro starter --image-type venv --run
+# Install dependencies for the starter distribution
+uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install
+# Run the server
+OLLAMA_URL=http://localhost:11434 uv run --with llama-stack llama stack run starter
 ```
 #### Step 3: Run the demo
 Now open up a new terminal and copy the following script into a file named `demo_script.py`.
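Before starting the stack, it can help to confirm that Ollama itself is reachable; `api/tags` is Ollama's endpoint for listing locally pulled models (a sanity check, not part of this diff):

```bash
# Should return a JSON list that includes llama3.2:3b if Step 1 succeeded
curl http://localhost:11434/api/tags
```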


@@ -10061,4 +10061,4 @@ x-tagGroups:
 - PostTraining (Coming Soon)
 - Safety
 - Telemetry
-- VectorIO
\ No newline at end of file
+- VectorIO