chore: update doc (#3857)

# What does this PR do?
follows https://github.com/llamastack/llama-stack/pull/3839

## Test Plan
ehhuang 2025-10-20 10:33:21 -07:00 committed by GitHub
parent 21772de5d3
commit 359df3a37c
25 changed files with 6380 additions and 6378 deletions

@@ -51,8 +51,8 @@ device: cpu
You can access the HuggingFace trainer via the `starter` distribution:
```bash
llama stack build --distro starter --image-type venv
llama stack run ~/.llama/distributions/starter/starter-run.yaml
llama stack list-deps starter | xargs -L1 uv pip install
llama stack run starter
```
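Before moving to the usage example, a quick smoke test against the freshly started server can save debugging time. The sketch below assumes the default port 8321 and the `llama_stack_client` Python package; it only lists the registered models.

```python
from llama_stack_client import LlamaStackClient

# Assumption: `llama stack run starter` above is serving on the default port 8321.
client = LlamaStackClient(base_url="http://localhost:8321")

# Listing models is a cheap check that the stack is up and that the provider
# dependencies installed via `llama stack list-deps` import cleanly.
for model in client.models.list():
    print(model.identifier)
```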
### Usage Example

@@ -175,8 +175,7 @@ llama-stack-client benchmarks register \
**1. Start the Llama Stack API Server**
```bash
# Build and run a distribution (example: together)
llama stack build --distro together --image-type venv
llama stack list-deps together | xargs -L1 uv pip install
llama stack run together
```
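Once the server is up, registering and inspecting benchmarks (as with the `llama-stack-client benchmarks register` command above) can also be driven from Python. A minimal sketch, assuming the default port 8321 and that the Python client mirrors the CLI's `benchmarks` group:

```python
from llama_stack_client import LlamaStackClient

# Assumption: the together distribution started above listens on port 8321.
client = LlamaStackClient(base_url="http://localhost:8321")

# Inspect what is already registered before adding new benchmarks.
print(client.models.list())
print(client.benchmarks.list())  # assumed to mirror `llama-stack-client benchmarks` in the CLI
```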
@@ -209,7 +208,7 @@ The playground works with any Llama Stack distribution. Popular options include:
<TabItem value="together" label="Together AI">
```bash
llama stack build --distro together --image-type venv
llama stack list-deps together | xargs -L1 uv pip install
llama stack run together
```
@@ -222,7 +221,7 @@ llama stack run together
<TabItem value="ollama" label="Ollama (Local)">
```bash
llama stack build --distro ollama --image-type venv
llama stack list-deps ollama | xargs -L1 uv pip install
llama stack run ollama
```
@@ -235,7 +234,7 @@ llama stack run ollama
<TabItem value="meta-reference" label="Meta Reference">
```bash
llama stack build --distro meta-reference --image-type venv
llama stack list-deps meta-reference | xargs -L1 uv pip install
llama stack run meta-reference
```

@@ -20,7 +20,8 @@ RAG enables your applications to reference and recall information from external
In one terminal, start the Llama Stack server:
```bash
uv run llama stack build --distro starter --image-type venv --run
llama stack list-deps starter | xargs -L1 uv pip install
llama stack run starter
```
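Step 2 below connects through the OpenAI-compatible API. A minimal sketch of that call, assuming the server above listens on the default port 8321, serves OpenAI-compatible routes under `/v1/openai/v1`, and has the model id shown registered (all three are assumptions):

```python
from openai import OpenAI

# Assumptions: default port 8321, OpenAI-compatible routes under /v1/openai/v1.
client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")

# Assumption: substitute any model id the starter distribution actually registers.
resp = client.chat.completions.create(
    model="ollama/llama3.2:3b",
    messages=[{"role": "user", "content": "What is retrieval-augmented generation?"}],
)
print(resp.choices[0].message.content)
```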
### 2. Connect with OpenAI Client

@@ -67,7 +67,7 @@ def get_base_url(self) -> str:
## Testing the Provider
Before running tests, you must have required dependencies installed. This depends on the providers or distributions you are testing. For example, if you are testing the `together` distribution, you should install dependencies via `llama stack build --distro together`.
Before running tests, you must have the required dependencies installed. This depends on the providers or distributions you are testing. For example, if you are testing the `together` distribution, install its dependencies with `llama stack list-deps together | xargs -L1 uv pip install`.
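As a hedged pre-flight check before the test suites below (package names are illustrative assumptions tied to the `together` example), you can confirm the installed dependencies resolve in the active environment:

```python
import importlib.util

# Adjust the package list for whichever distribution you are testing;
# `together` is assumed here to match the example above.
for pkg in ("llama_stack", "together"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'installed' if found else 'missing'}")
```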
### 1. Integration Testing

@@ -12,7 +12,7 @@ This avoids the overhead of setting up a server.
```bash
# setup
uv pip install llama-stack
llama stack build --distro starter --image-type venv
llama stack list-deps starter | xargs -L1 uv pip install
```
```python
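# Hedged sketch of library-client usage (not this page's original snippet).
# Assumption: the class is importable from this path; some releases export it
# from llama_stack.core.library_client instead.
from llama_stack.distribution.library_client import LlamaStackAsLibraryClient

client = LlamaStackAsLibraryClient("starter")
client.initialize()  # builds the in-process stack from the starter distribution
print([m.identifier for m in client.models.list()])
```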

@@ -59,7 +59,7 @@ Start a Llama Stack server on localhost. Here is an example of how you can do th
uv venv starter --python 3.12
source starter/bin/activate # On Windows: starter\Scripts\activate
pip install --no-cache llama-stack==0.2.2
llama stack build --distro starter --image-type venv
llama stack list-deps starter | xargs -L1 uv pip install
export FIREWORKS_API_KEY=<SOME_KEY>
llama stack run starter --port 5050
```
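A minimal sketch of calling the server started above, assuming it is reachable on port 5050 and that a Fireworks-served Llama model is registered under the id shown (both assumptions), using the 0.2-era client API:

```python
from llama_stack_client import LlamaStackClient

# Assumption: the server above is reachable on the non-default port 5050.
client = LlamaStackClient(base_url="http://localhost:5050")

# Assumption: pick any model id returned by client.models.list().
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.completion_message.content)
```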

@@ -166,10 +166,10 @@ docker run \
### Via venv
Make sure you have done `pip install llama-stack` and have the Llama Stack CLI available.
Install the distribution dependencies before launching:
```bash
llama stack build --distro dell --image-type venv
llama stack list-deps dell | xargs -L1 uv pip install
INFERENCE_MODEL=$INFERENCE_MODEL \
DEH_URL=$DEH_URL \
CHROMA_URL=$CHROMA_URL \

@@ -81,10 +81,10 @@ docker run \
### Via venv
Make sure you have done `uv pip install llama-stack` and have the Llama Stack CLI available.
Make sure you have the Llama Stack CLI available.
```bash
llama stack build --distro meta-reference-gpu --image-type venv
llama stack list-deps meta-reference-gpu | xargs -L1 uv pip install
INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
llama stack run distributions/meta-reference-gpu/run.yaml \
--port 8321

@@ -136,11 +136,11 @@ docker run \
### Via venv
If you've set up your local development environment, you can also build the image using your local virtual environment.
If you've set up your local development environment, you can also install the distribution dependencies using your local virtual environment.
```bash
INFERENCE_MODEL=meta-llama/Llama-3.1-8B-Instruct
llama stack build --distro nvidia --image-type venv
llama stack list-deps nvidia | xargs -L1 uv pip install
NVIDIA_API_KEY=$NVIDIA_API_KEY \
INFERENCE_MODEL=$INFERENCE_MODEL \
llama stack run ./run.yaml \

@@ -240,6 +240,6 @@ additional_pip_packages:
- sqlalchemy[asyncio]
```
No other steps are required other than `llama stack build` and `llama stack run`. The build process will use `module` to install all of the provider dependencies, retrieve the spec, etc.
No other steps are required beyond installing dependencies with `llama stack list-deps <distro> | xargs -L1 uv pip install` and then running `llama stack run`. The CLI will use `module` to install the provider dependencies, retrieve the spec, etc.
The provider will now be available in Llama Stack with the type `remote::ramalama`.
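Once the stack is running, a hedged way to confirm the externally packaged provider registered, assuming the default port and that the Python client exposes the providers API:

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Look for the externally packaged provider among everything registered.
for p in client.providers.list():
    if p.provider_type == "remote::ramalama":
        print(f"found {p.provider_id} ({p.provider_type}) for api={p.api}")
```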