<!-- This file was auto-generated by distro_codegen.py, please edit source -->

---
orphan: true
---

# Remote vLLM Distribution

```{toctree}
:maxdepth: 2
:hidden:

self
```

The `llamastack/distribution-remote-vllm` distribution consists of the following provider configurations:

| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| datasetio | `remote::huggingface`, `inline::localfs` |
| eval | `inline::meta-reference` |
| inference | `remote::vllm` |
| safety | `inline::llama-guard` |
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::code-interpreter`, `inline::rag-runtime`, `remote::model-context-protocol` |
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |

You can use this distribution if you have GPUs and want to run an independent vLLM server container for running inference.

### Environment Variables

The following environment variables can be configured:

- `LLAMA_STACK_PORT`: Port for the Llama Stack distribution server (default: `5001`)
- `INFERENCE_MODEL`: Inference model loaded into the vLLM server (default: `meta-llama/Llama-3.2-3B-Instruct`)
- `VLLM_URL`: URL of the vLLM server with the main inference model (default: `http://host.docker.internal:5100/v1`)
- `MAX_TOKENS`: Maximum number of tokens for generation (default: `4096`)
- `SAFETY_VLLM_URL`: URL of the vLLM server with the safety model (default: `http://host.docker.internal:5101/v1`)
- `SAFETY_MODEL`: Name of the safety (Llama-Guard) model to use (default: `meta-llama/Llama-Guard-3-1B`)
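
These variables are substituted into the distribution's `run.yaml` (via its `${env.VAR}` placeholders) and can be overridden at launch with `--env` flags, as the commands below show. For example (illustrative values, assuming a vLLM server already listening on port 8000):

```bash
export LLAMA_STACK_PORT=5001
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
export VLLM_URL=http://localhost:8000/v1
```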

## Setting up vLLM server

Please check the [vLLM Documentation](https://docs.vllm.ai/en/v0.5.5/serving/deploying_with_docker.html) to get a vLLM endpoint. Here is a sample script to start a vLLM server locally via Docker:

```bash
export INFERENCE_PORT=8000
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
export CUDA_VISIBLE_DEVICES=0

docker run \
    --runtime nvidia \
    --gpus $CUDA_VISIBLE_DEVICES \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
    -p $INFERENCE_PORT:$INFERENCE_PORT \
    --ipc=host \
    vllm/vllm-openai:latest \
    --gpu-memory-utilization 0.7 \
    --model $INFERENCE_MODEL \
    --port $INFERENCE_PORT
```
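
Once the container is serving, you can sanity-check the endpoint by querying its OpenAI-compatible model list (a quick verification step, assuming the server is reachable on localhost):

```bash
curl http://localhost:$INFERENCE_PORT/v1/models
```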

If you are using Llama Stack Safety / Shield APIs, you will also need to run a second vLLM instance serving a corresponding safety model such as `meta-llama/Llama-Guard-3-1B`, using a script like:

```bash
export SAFETY_PORT=8081
export SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
export CUDA_VISIBLE_DEVICES=1

docker run \
    --runtime nvidia \
    --gpus $CUDA_VISIBLE_DEVICES \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
    -p $SAFETY_PORT:$SAFETY_PORT \
    --ipc=host \
    vllm/vllm-openai:latest \
    --gpu-memory-utilization 0.7 \
    --model $SAFETY_MODEL \
    --port $SAFETY_PORT
```
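
As a rough check that the safety model responds, you can send it a trial chat completion (illustrative prompt; Llama Guard answers with a safe/unsafe verdict rather than a normal reply):

```bash
curl http://localhost:$SAFETY_PORT/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "{\"model\": \"$SAFETY_MODEL\", \"messages\": [{\"role\": \"user\", \"content\": \"How do I bake bread?\"}]}"
```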

## Running Llama Stack

Now you are ready to run Llama Stack with vLLM as the inference provider. You can do this via Conda (building the code yourself) or Docker (which uses a pre-built image).

### Via Docker

This method allows you to get started quickly without having to build the distribution code.

```bash
export INFERENCE_PORT=8000
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
export LLAMA_STACK_PORT=5001

docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ./run.yaml:/root/my-run.yaml \
  llamastack/distribution-remote-vllm \
  --yaml-config /root/my-run.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env VLLM_URL=http://host.docker.internal:$INFERENCE_PORT/v1
```

If you are using Llama Stack Safety / Shield APIs, use:

```bash
export SAFETY_PORT=8081
export SAFETY_MODEL=meta-llama/Llama-Guard-3-1B

# You need a local checkout of llama-stack to run this, get it using
# git clone https://github.com/meta-llama/llama-stack.git
cd /path/to/llama-stack

docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  -v ./llama_stack/templates/remote-vllm/run-with-safety.yaml:/root/my-run.yaml \
  llamastack/distribution-remote-vllm \
  --yaml-config /root/my-run.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env VLLM_URL=http://host.docker.internal:$INFERENCE_PORT/v1 \
  --env SAFETY_MODEL=$SAFETY_MODEL \
  --env SAFETY_VLLM_URL=http://host.docker.internal:$SAFETY_PORT/v1
```
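
With the server up, you can verify it from another terminal using the `llama-stack-client` CLI (a quick check; assumes `uv pip install llama-stack-client` on the host):

```bash
llama-stack-client configure --endpoint http://localhost:$LLAMA_STACK_PORT
llama-stack-client models list
```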

### Via Conda

Make sure you have done `uv pip install llama-stack` and have the Llama Stack CLI available.

```bash
export INFERENCE_PORT=8000
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
export LLAMA_STACK_PORT=5001

cd distributions/remote-vllm
llama stack build --template remote-vllm --image-type conda

llama stack run ./run.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env VLLM_URL=http://localhost:$INFERENCE_PORT/v1
```

If you are using Llama Stack Safety / Shield APIs, use:

```bash
export SAFETY_PORT=8081
export SAFETY_MODEL=meta-llama/Llama-Guard-3-1B

llama stack run ./run-with-safety.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env VLLM_URL=http://localhost:$INFERENCE_PORT/v1 \
  --env SAFETY_MODEL=$SAFETY_MODEL \
  --env SAFETY_VLLM_URL=http://localhost:$SAFETY_PORT/v1
```
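
To exercise the running server end to end, you can run the client-SDK tests from a local `llama-stack` checkout against it:

```bash
cd /path/to/llama-stack
LLAMA_STACK_BASE_URL=http://localhost:$LLAMA_STACK_PORT pytest -s -v tests/client-sdk/agents/test_agents.py
```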