commit 7d4530a4c4 (parent: 62719f1b8a)

docs: update docs to use llama stack list-deps.

Signed-off-by: Charlie Doern <cdoern@redhat.com>

7 changed files with 33 additions and 14 deletions

@@ -158,9 +158,9 @@ under the LICENSE file in the root directory of this source tree.
 Some tips about common tasks you work on while contributing to Llama Stack:
 
-### Using `llama stack build`
+### Installing dependencies of distributions
 
-Building a stack image will use the production version of the `llama-stack` and `llama-stack-client` packages. If you are developing with a llama-stack repository checked out and need your code to be reflected in the stack image, set `LLAMA_STACK_DIR` and `LLAMA_STACK_CLIENT_DIR` to the appropriate checked out directories when running any of the `llama` CLI commands.
+When installing dependencies for a distribution, you can use `llama stack list-deps` to view and install the required packages.
 
 Example:
 
 ```bash

@@ -168,7 +168,12 @@ cd work/
 git clone https://github.com/llamastack/llama-stack.git
 git clone https://github.com/llamastack/llama-stack-client-python.git
 cd llama-stack
-LLAMA_STACK_DIR=$(pwd) LLAMA_STACK_CLIENT_DIR=../llama-stack-client-python llama stack build --distro <...>
+
+# Show dependencies for a distribution
+llama stack list-deps <distro-name>
+
+# Install dependencies
+llama stack list-deps <distro-name> | xargs -L1 uv pip install
 ```
 
 ### Updating distribution configurations
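
A quick way to sanity-check the new flow locally; a minimal sketch, assuming `llama stack list-deps` prints one pip requirement per line (which the `xargs -L1` pipe above implies):

```bash
# Capture the dependency list for review, then install it in one go.
# `starter` is used here purely as an example distribution name.
llama stack list-deps starter > starter-deps.txt
uv pip install -r starter-deps.txt
```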

@@ -27,8 +27,11 @@ MODEL="Llama-4-Scout-17B-16E-Instruct"
 # get meta url from llama.com
 llama model download --source meta --model-id $MODEL --meta-url <META_URL>
 
+# install dependencies for the distribution
+llama stack list-deps meta-reference-gpu | xargs -L1 uv pip install
+
 # start a llama stack server
-INFERENCE_MODEL=meta-llama/$MODEL llama stack build --run --template meta-reference-gpu
+INFERENCE_MODEL=meta-llama/$MODEL llama stack run meta-reference-gpu
 
 # install client to interact with the server
 pip install llama-stack-client
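
Put end to end, the updated quickstart reads as one script. A sketch only: `$META_URL` is a placeholder for the URL you obtain from llama.com, not a variable the docs define.

```bash
#!/usr/bin/env bash
set -euo pipefail

MODEL="Llama-4-Scout-17B-16E-Instruct"

# Download weights, install the distribution's dependencies, then serve.
llama model download --source meta --model-id "$MODEL" --meta-url "$META_URL"
llama stack list-deps meta-reference-gpu | xargs -L1 uv pip install
INFERENCE_MODEL=meta-llama/$MODEL llama stack run meta-reference-gpu
```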

@@ -158,7 +158,7 @@ under the LICENSE file in the root directory of this source tree.
 Some tips about common tasks you work on while contributing to Llama Stack:
 
-### Using `llama stack build`
+### Installing dependencies of distributions
 
 Building a stack image will use the production version of the `llama-stack` and `llama-stack-client` packages. If you are developing with a llama-stack repository checked out and need your code to be reflected in the stack image, set `LLAMA_STACK_DIR` and `LLAMA_STACK_CLIENT_DIR` to the appropriate checked out directories when running any of the `llama` CLI commands.

@@ -168,7 +168,7 @@ cd work/
 git clone https://github.com/meta-llama/llama-stack.git
 git clone https://github.com/meta-llama/llama-stack-client-python.git
 cd llama-stack
-LLAMA_STACK_DIR=$(pwd) LLAMA_STACK_CLIENT_DIR=../llama-stack-client-python llama stack build --distro <...>
+llama stack build --distro <...>
 ```
 
 ### Updating distribution configurations
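
This copy of the guide keeps the `LLAMA_STACK_DIR` paragraph but drops the variables from the command. For completeness, a sketch of the development setup that paragraph describes; the directory layout is illustrative, and `<...>` still stands for a distribution name:

```bash
# Point the CLI at local checkouts so in-progress changes are picked up.
cd work/llama-stack
LLAMA_STACK_DIR=$(pwd) \
LLAMA_STACK_CLIENT_DIR=../llama-stack-client-python \
llama stack build --distro <...>
```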

@@ -169,7 +169,11 @@ docker run \
 Ensure you have configured the starter distribution using the environment variables explained above.
 
 ```bash
-uv run --with llama-stack llama stack build --distro starter --image-type venv --run
+# Install dependencies for the starter distribution
+uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install
+
+# Run the server
+uv run --with llama-stack llama stack run starter
 ```
 
 ## Example Usage
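
`uv run --with llama-stack` executes the CLI in an ephemeral environment that includes the `llama-stack` package. If you prefer a persistent virtualenv, a sketch of an equivalent flow using standard `uv` commands (the venv path is arbitrary):

```bash
# Create and activate a project venv, install the CLI once,
# then run the same two steps without the --with wrapper.
uv venv .venv
source .venv/bin/activate
uv pip install llama-stack
llama stack list-deps starter | xargs -L1 uv pip install
llama stack run starter
```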

@@ -58,15 +58,19 @@ Llama Stack is a server that exposes multiple APIs, you connect with it using th
 <Tabs>
 <TabItem value="venv" label="Using venv">
-You can use Python to build and run the Llama Stack server, which is useful for testing and development.
+You can use Python to install dependencies and run the Llama Stack server, which is useful for testing and development.
 
 Llama Stack uses a [YAML configuration file](../distributions/configuration) to specify the stack setup,
 which defines the providers and their settings. The generated configuration serves as a starting point that you can [customize for your specific needs](../distributions/customizing_run_yaml).
-Now let's build and run the Llama Stack config for Ollama.
+Now let's install dependencies and run the Llama Stack config for Ollama.
 We use `starter` as the template. By default all providers are disabled, so Ollama must be enabled by passing environment variables.
 
 ```bash
-llama stack build --distro starter --image-type venv --run
+# Install dependencies for the starter distribution
+uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install
+
+# Run the server
+llama stack run starter
 ```
 
 </TabItem>
 <TabItem value="container" label="Using a Container">
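
"Passing environment variables" here means pointing the server at a running Ollama instance; the variable name comes from the Step 2 hunk below:

```bash
# Enable the Ollama provider when starting the server.
OLLAMA_URL=http://localhost:11434 llama stack run starter
```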

@@ -24,10 +24,13 @@ ollama run llama3.2:3b --keepalive 60m
 #### Step 2: Run the Llama Stack server
 
-We will use `uv` to run the Llama Stack server.
+We will use `uv` to install dependencies and run the Llama Stack server.
 ```bash
-OLLAMA_URL=http://localhost:11434 \
-uv run --with llama-stack llama stack build --distro starter --image-type venv --run
+# Install dependencies for the starter distribution
+uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install
+
+# Run the server
+OLLAMA_URL=http://localhost:11434 uv run --with llama-stack llama stack run starter
 ```
 
 #### Step 3: Run the demo
 Now open up a new terminal and copy the following script into a file named `demo_script.py`.
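
Before moving on to Step 3, it can help to confirm the server is up. A sketch, assuming Llama Stack's default port 8321 and its health endpoint; verify both against your server's startup logs:

```bash
# Optional sanity check before running the demo script.
curl http://localhost:8321/v1/health
```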

docs/static/deprecated-llama-stack-spec.yaml (vendored, 2 lines changed)
@@ -10061,4 +10061,4 @@ x-tagGroups:
 - PostTraining (Coming Soon)
 - Safety
 - Telemetry
 - VectorIO