Merge branch 'main' into add-nvidia-inference-adapter

This commit is contained in:
Matthew Farrellee 2024-11-19 10:25:50 -05:00
commit 2a25ace2fa
131 changed files with 3927 additions and 1286 deletions

View file

@ -57,3 +57,17 @@ repos:
# hooks:
# - id: markdown-link-check
# args: ['--quiet']
# - repo: local
# hooks:
# - id: distro-codegen
# name: Distribution Template Codegen
# additional_dependencies:
# - rich
# - pydantic
# entry: python -m llama_stack.scripts.distro_codegen
# language: python
# pass_filenames: false
# require_serial: true
# files: ^llama_stack/templates/.*$
# stages: [manual]
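The `distro-codegen` hook above is added commented out and gated to the `manual` stage. As a sketch only (assuming the block is uncommented and the hook id stays `distro-codegen`), it could be run on demand with:

```bash
# Run just the manual-stage codegen hook across the whole repo
pre-commit run distro-codegen --hook-stage manual --all-files
```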

View file

@ -12,6 +12,11 @@ We actively welcome your pull requests.
5. Make sure your code lints.
6. If you haven't already, complete the Contributor License Agreement ("CLA").
### Updating Provider Configurations
If you have made changes to a provider's configuration in any form (introducing a new config key, or changing models, etc.), you should run `python llama_stack/scripts/distro_codegen.py` to re-generate various YAML files as well as the documentation. You should not change `docs/source/.../distributions/` files manually as they are auto-generated.
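A minimal sketch of that workflow (command taken verbatim from the paragraph above):

```bash
# Re-generate distribution YAML files and docs after editing a provider config
python llama_stack/scripts/distro_codegen.py
# Files under docs/source/.../distributions/ are auto-generated; don't edit them by hand
```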
### Building the Documentation
If you are making changes to the documentation at [https://llama-stack.readthedocs.io/en/latest/](https://llama-stack.readthedocs.io/en/latest/), you can use the following command to build the documentation and preview your changes. You will need [Sphinx](https://www.sphinx-doc.org/en/master/) and the readthedocs theme.
@ -26,6 +31,19 @@ make html
sphinx-autobuild source build/html
```
## Pre-commit Hooks
We use [pre-commit](https://pre-commit.com/) to run linting and formatting checks on your code. You can install the pre-commit hooks by running:
```bash
$ cd llama-stack
$ conda activate <your-environment>
$ pip install pre-commit
$ pre-commit install
```
After that, pre-commit hooks will run automatically before each commit.
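Once installed, the hooks fire on every `git commit`; they can also be run over the whole tree on demand (a standard pre-commit invocation, not specific to this repo):

```bash
# Run every configured hook against all files without creating a commit
pre-commit run --all-files
```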
## Contributor License Agreement ("CLA")
In order to accept your pull request, we need you to submit a CLA. You only need
to do this once to work on any of Meta's open source projects.

View file

@ -1,4 +1,4 @@
include requirements.txt
include llama_stack/distribution/*.sh
include llama_stack/cli/scripts/*.sh
include llama_stack/templates/*/build.yaml
include llama_stack/templates/*/*.yaml

View file

@ -112,7 +112,7 @@ Please checkout our [Documentations](https://llama-stack.readthedocs.io/en/lates
| Python | [llama-stack-client-python](https://github.com/meta-llama/llama-stack-client-python) | [![PyPI version](https://img.shields.io/pypi/v/llama_stack_client.svg)](https://pypi.org/project/llama_stack_client/)
| Swift | [llama-stack-client-swift](https://github.com/meta-llama/llama-stack-client-swift) | [![Swift Package Index](https://img.shields.io/endpoint?url=https%3A%2F%2Fswiftpackageindex.com%2Fapi%2Fpackages%2Fmeta-llama%2Fllama-stack-client-swift%2Fbadge%3Ftype%3Dswift-versions)](https://swiftpackageindex.com/meta-llama/llama-stack-client-swift)
| Node | [llama-stack-client-node](https://github.com/meta-llama/llama-stack-client-node) | [![NPM version](https://img.shields.io/npm/v/llama-stack-client.svg)](https://npmjs.org/package/llama-stack-client)
| Kotlin | [llama-stack-client-kotlin](https://github.com/meta-llama/llama-stack-client-kotlin) |
| Kotlin | [llama-stack-client-kotlin](https://github.com/meta-llama/llama-stack-client-kotlin) | [![Maven version](https://img.shields.io/maven-central/v/com.llama.llamastack/llama-stack-client-kotlin)](https://central.sonatype.com/artifact/com.llama.llamastack/llama-stack-client-kotlin)
Check out our client SDKs for connecting to Llama Stack server in your preferred language, you can choose from [python](https://github.com/meta-llama/llama-stack-client-python), [node](https://github.com/meta-llama/llama-stack-client-node), [swift](https://github.com/meta-llama/llama-stack-client-swift), and [kotlin](https://github.com/meta-llama/llama-stack-client-kotlin) programming languages to quickly build your applications.

View file

@ -1,5 +1,4 @@
version: '2'
built_at: '2024-11-01T17:40:45.325529'
image_name: local
name: bedrock
docker_image: null

View file

@ -1,5 +1,4 @@
version: '2'
built_at: '2024-10-08T17:40:45.325529'
image_name: local
docker_image: null
conda_env: local

View file

@ -1,51 +0,0 @@
version: '2'
built_at: '2024-10-08T17:40:45.325529'
image_name: local
docker_image: null
conda_env: local
apis:
- shields
- agents
- models
- memory
- memory_banks
- inference
- safety
providers:
inference:
- provider_id: fireworks0
provider_type: remote::fireworks
config:
url: https://api.fireworks.ai/inference
# api_key: <ENTER_YOUR_API_KEY>
safety:
safety:
- provider_id: meta0
provider_type: inline::llama-guard
config:
model: Llama-Guard-3-1B
excluded_categories: []
- provider_id: meta1
provider_type: inline::prompt-guard
config:
model: Prompt-Guard-86M
memory:
- provider_id: meta0
provider_type: inline::meta-reference
config: {}
# Uncomment to use weaviate memory provider
# - provider_id: weaviate0
# provider_type: remote::weaviate
# config: {}
agents:
- provider_id: meta0
provider_type: inline::meta-reference
config:
persistence_store:
namespace: null
type: sqlite
db_path: ~/.llama/runtime/kvstore.db
telemetry:
- provider_id: meta0
provider_type: inline::meta-reference
config: {}

View file

@ -0,0 +1 @@
../../llama_stack/templates/fireworks/run.yaml

View file

@ -1,5 +1,4 @@
version: '2'
built_at: '2024-10-08T17:40:45.325529'
image_name: local
docker_image: null
conda_env: local

View file

@ -0,0 +1 @@
../../llama_stack/templates/meta-reference-gpu/run-with-safety.yaml

View file

@ -1,69 +0,0 @@
version: '2'
built_at: '2024-10-08T17:40:45.325529'
image_name: local
docker_image: null
conda_env: local
apis:
- shields
- agents
- models
- memory
- memory_banks
- inference
- safety
providers:
inference:
- provider_id: inference0
provider_type: inline::meta-reference
config:
model: Llama3.2-3B-Instruct
quantization: null
torch_seed: null
max_seq_len: 4096
max_batch_size: 1
- provider_id: inference1
provider_type: inline::meta-reference
config:
model: Llama-Guard-3-1B
quantization: null
torch_seed: null
max_seq_len: 2048
max_batch_size: 1
safety:
- provider_id: meta0
provider_type: inline::llama-guard
config:
model: Llama-Guard-3-1B
excluded_categories: []
- provider_id: meta1
provider_type: inline::prompt-guard
config:
model: Prompt-Guard-86M
# Uncomment to use prompt guard
# prompt_guard_shield:
# model: Prompt-Guard-86M
memory:
- provider_id: meta0
provider_type: inline::meta-reference
config: {}
# Uncomment to use pgvector
# - provider_id: pgvector
# provider_type: remote::pgvector
# config:
# host: 127.0.0.1
# port: 5432
# db: postgres
# user: postgres
# password: mysecretpassword
agents:
- provider_id: meta0
provider_type: inline::meta-reference
config:
persistence_store:
namespace: null
type: sqlite
db_path: ~/.llama/runtime/agents_store.db
telemetry:
- provider_id: meta0
provider_type: inline::meta-reference
config: {}

View file

@ -0,0 +1 @@
../../llama_stack/templates/meta-reference-gpu/run.yaml

View file

@ -1,5 +1,4 @@
version: '2'
built_at: '2024-10-08T17:40:45.325529'
image_name: local
docker_image: null
conda_env: local

View file

@ -1,5 +1,4 @@
version: '2'
built_at: '2024-10-08T17:40:45.325529'
image_name: local
docker_image: null
conda_env: local
@ -13,20 +12,15 @@ apis:
- safety
providers:
inference:
- provider_id: ollama0
- provider_id: ollama
provider_type: remote::ollama
config:
url: http://127.0.0.1:14343
url: ${env.OLLAMA_URL:http://127.0.0.1:11434}
safety:
- provider_id: meta0
provider_type: inline::llama-guard
config:
model: Llama-Guard-3-1B
excluded_categories: []
- provider_id: meta1
provider_type: inline::prompt-guard
config:
model: Prompt-Guard-86M
memory:
- provider_id: meta0
provider_type: inline::meta-reference
@ -43,3 +37,10 @@ providers:
- provider_id: meta0
provider_type: inline::meta-reference
config: {}
models:
- model_id: ${env.INFERENCE_MODEL:Llama3.2-3B-Instruct}
provider_id: ollama
- model_id: ${env.SAFETY_MODEL:Llama-Guard-3-1B}
provider_id: ollama
shields:
- shield_id: ${env.SAFETY_MODEL:Llama-Guard-3-1B}
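The `${env.VAR:default}` placeholders above fall back to the literal default after the colon when the variable is unset. A minimal sketch of overriding them before starting the server (variable names taken from the file above; values are illustrative):

```bash
# Unset variables fall back to the defaults shown in run.yaml
export OLLAMA_URL=http://127.0.0.1:11434
export INFERENCE_MODEL=Llama3.2-3B-Instruct
export SAFETY_MODEL=Llama-Guard-3-1B
```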

View file

@ -1,30 +1,71 @@
services: services:
ollama: ollama:
image: ollama/ollama:latest image: ollama/ollama:latest
network_mode: "host" network_mode: ${NETWORK_MODE:-bridge}
volumes: volumes:
- ollama:/root/.ollama # this solution synchronizes with the docker volume and loads the model rocket fast - ~/.ollama:/root/.ollama
ports: ports:
- "11434:11434" - "11434:11434"
environment:
OLLAMA_DEBUG: 1
command: [] command: []
deploy:
resources:
limits:
memory: 8G # Set maximum memory
reservations:
memory: 8G # Set minimum memory reservation
# healthcheck:
# # ugh, no CURL in ollama image
# test: ["CMD", "curl", "-f", "http://ollama:11434"]
# interval: 10s
# timeout: 5s
# retries: 5
ollama-init:
image: ollama/ollama:latest
depends_on:
- ollama
# condition: service_healthy
network_mode: ${NETWORK_MODE:-bridge}
environment:
- OLLAMA_HOST=ollama
- INFERENCE_MODEL=${INFERENCE_MODEL}
- SAFETY_MODEL=${SAFETY_MODEL:-}
volumes:
- ~/.ollama:/root/.ollama
- ./pull-models.sh:/pull-models.sh
entrypoint: ["/pull-models.sh"]
llamastack: llamastack:
depends_on: depends_on:
- ollama ollama:
image: llamastack/distribution-ollama condition: service_started
network_mode: "host" ollama-init:
condition: service_started
image: ${LLAMA_STACK_IMAGE:-llamastack/distribution-ollama}
network_mode: ${NETWORK_MODE:-bridge}
volumes: volumes:
- ~/.llama:/root/.llama - ~/.llama:/root/.llama
# Link to ollama run.yaml file # Link to ollama run.yaml file
- ./run.yaml:/root/my-run.yaml - ~/local/llama-stack/:/app/llama-stack-source
- ./run${SAFETY_MODEL:+-with-safety}.yaml:/root/my-run.yaml
ports: ports:
- "5000:5000" - "${LLAMA_STACK_PORT:-5001}:${LLAMA_STACK_PORT:-5001}"
# Hack: wait for ollama server to start before starting docker environment:
entrypoint: bash -c "sleep 60; python -m llama_stack.distribution.server.server --yaml_config /root/my-run.yaml" - INFERENCE_MODEL=${INFERENCE_MODEL}
- SAFETY_MODEL=${SAFETY_MODEL:-}
- OLLAMA_URL=http://ollama:11434
entrypoint: >
python -m llama_stack.distribution.server.server /root/my-run.yaml \
--port ${LLAMA_STACK_PORT:-5001}
deploy: deploy:
restart_policy: restart_policy:
condition: on-failure condition: on-failure
delay: 3s delay: 10s
max_attempts: 5 max_attempts: 3
window: 60s window: 60s
volumes: volumes:
ollama: ollama:
ollama-init:
llamastack:
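A sketch of bringing this stack up with the compose file above (assumes Docker Compose v2; variable names come from the file itself, and `SAFETY_MODEL` may be left unset to skip the safety configuration):

```bash
# ollama-init preloads the models, then llamastack starts against OLLAMA_URL
INFERENCE_MODEL=Llama3.2-3B-Instruct \
SAFETY_MODEL=Llama-Guard-3-1B \
LLAMA_STACK_PORT=5001 \
docker compose up -d
```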

View file

@ -0,0 +1,18 @@
#!/bin/sh
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
echo "Preloading (${INFERENCE_MODEL}, ${SAFETY_MODEL})..."
for model in ${INFERENCE_MODEL} ${SAFETY_MODEL}; do
echo "Preloading $model..."
if ! ollama run "$model"; then
echo "Failed to pull and run $model"
exit 1
fi
done
echo "All models pulled successfully"

View file

@ -0,0 +1 @@
../../llama_stack/templates/ollama/run-with-safety.yaml

View file

@ -1,45 +0,0 @@
version: '2'
built_at: '2024-10-08T17:40:45.325529'
image_name: local
docker_image: null
conda_env: local
apis:
- shields
- agents
- models
- memory
- memory_banks
- inference
- safety
providers:
inference:
- provider_id: ollama0
provider_type: remote::ollama
config:
url: http://127.0.0.1:14343
safety:
- provider_id: meta0
provider_type: inline::llama-guard
config:
model: Llama-Guard-3-1B
excluded_categories: []
- provider_id: meta1
provider_type: inline::prompt-guard
config:
model: Prompt-Guard-86M
memory:
- provider_id: meta0
provider_type: inline::meta-reference
config: {}
agents:
- provider_id: meta0
provider_type: inline::meta-reference
config:
persistence_store:
namespace: null
type: sqlite
db_path: ~/.llama/runtime/kvstore.db
telemetry:
- provider_id: meta0
provider_type: inline::meta-reference
config: {}

View file

@ -0,0 +1 @@
../../llama_stack/templates/ollama/run.yaml

View file

@ -1,33 +1,28 @@
# NOTES:
#
# This Docker Compose (and the associated run.yaml) assumes you will be
# running in the default "bridged" network mode.
#
# If you need "host" network mode, please uncomment
# - network_mode: "host"
#
# Similarly change "host.docker.internal" to "localhost" in the run.yaml file
#
services: services:
vllm-0: vllm-inference:
image: vllm/vllm-openai:latest image: vllm/vllm-openai:latest
volumes: volumes:
- $HOME/.cache/huggingface:/root/.cache/huggingface - $HOME/.cache/huggingface:/root/.cache/huggingface
# network_mode: "host" network_mode: ${NETWORK_MODE:-bridged}
ports: ports:
- "5100:5100" - "${VLLM_INFERENCE_PORT:-5100}:${VLLM_INFERENCE_PORT:-5100}"
devices: devices:
- nvidia.com/gpu=all - nvidia.com/gpu=all
environment: environment:
- CUDA_VISIBLE_DEVICES=0 - CUDA_VISIBLE_DEVICES=${VLLM_INFERENCE_GPU:-0}
- HUGGING_FACE_HUB_TOKEN=$HF_TOKEN - HUGGING_FACE_HUB_TOKEN=$HF_TOKEN
command: > command: >
--gpu-memory-utilization 0.75 --gpu-memory-utilization 0.75
--model meta-llama/Llama-3.1-8B-Instruct --model ${VLLM_INFERENCE_MODEL:-meta-llama/Llama-3.2-3B-Instruct}
--enforce-eager --enforce-eager
--max-model-len 8192 --max-model-len 8192
--max-num-seqs 16 --max-num-seqs 16
--port 5100 --port ${VLLM_INFERENCE_PORT:-5100}
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:${VLLM_INFERENCE_PORT:-5100}/v1/health"]
interval: 30s
timeout: 10s
retries: 5
deploy: deploy:
resources: resources:
reservations: reservations:
@ -35,25 +30,34 @@ services:
- driver: nvidia - driver: nvidia
capabilities: [gpu] capabilities: [gpu]
runtime: nvidia runtime: nvidia
vllm-1:
# A little trick:
# if VLLM_SAFETY_MODEL is set, we will create a service for the safety model
# otherwise, the entry will end in a hyphen which gets ignored by docker compose
vllm-${VLLM_SAFETY_MODEL:+safety}:
image: vllm/vllm-openai:latest image: vllm/vllm-openai:latest
volumes: volumes:
- $HOME/.cache/huggingface:/root/.cache/huggingface - $HOME/.cache/huggingface:/root/.cache/huggingface
# network_mode: "host" network_mode: ${NETWORK_MODE:-bridged}
ports: ports:
- "5101:5101" - "${VLLM_SAFETY_PORT:-5101}:${VLLM_SAFETY_PORT:-5101}"
devices: devices:
- nvidia.com/gpu=all - nvidia.com/gpu=all
environment: environment:
- CUDA_VISIBLE_DEVICES=1 - CUDA_VISIBLE_DEVICES=${VLLM_SAFETY_GPU:-1}
- HUGGING_FACE_HUB_TOKEN=$HF_TOKEN - HUGGING_FACE_HUB_TOKEN=$HF_TOKEN
command: > command: >
--gpu-memory-utilization 0.75 --gpu-memory-utilization 0.75
--model meta-llama/Llama-Guard-3-1B --model ${VLLM_SAFETY_MODEL}
--enforce-eager --enforce-eager
--max-model-len 8192 --max-model-len 8192
--max-num-seqs 16 --max-num-seqs 16
--port 5101 --port ${VLLM_SAFETY_PORT:-5101}
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:${VLLM_SAFETY_PORT:-5101}/v1/health"]
interval: 30s
timeout: 10s
retries: 5
deploy: deploy:
resources: resources:
reservations: reservations:
@ -63,23 +67,25 @@ services:
runtime: nvidia runtime: nvidia
llamastack: llamastack:
depends_on: depends_on:
- vllm-0 - vllm-inference:
- vllm-1 condition: service_healthy
# image: llamastack/distribution-remote-vllm - vllm-${VLLM_SAFETY_MODEL:+safety}:
condition: service_healthy
# image: llamastack/distribution-remote-vllm
image: llamastack/distribution-remote-vllm:test-0.0.52rc3 image: llamastack/distribution-remote-vllm:test-0.0.52rc3
volumes: volumes:
- ~/.llama:/root/.llama - ~/.llama:/root/.llama
- ~/local/llama-stack/distributions/remote-vllm/run.yaml:/root/llamastack-run-remote-vllm.yaml - ./run${VLLM_SAFETY_MODEL:+-with-safety}.yaml:/root/llamastack-run-remote-vllm.yaml
# network_mode: "host" network_mode: ${NETWORK_MODE:-bridged}
environment: environment:
- LLAMA_INFERENCE_VLLM_URL=${LLAMA_INFERENCE_VLLM_URL:-http://host.docker.internal:5100/v1} - VLLM_URL=http://vllm-inference:${VLLM_INFERENCE_PORT:-5100}/v1
- LLAMA_INFERENCE_MODEL=${LLAMA_INFERENCE_MODEL:-Llama3.1-8B-Instruct} - VLLM_SAFETY_URL=http://vllm-safety:${VLLM_SAFETY_PORT:-5101}/v1
- INFERENCE_MODEL=${INFERENCE_MODEL:-meta-llama/Llama-3.2-3B-Instruct}
- MAX_TOKENS=${MAX_TOKENS:-4096} - MAX_TOKENS=${MAX_TOKENS:-4096}
- SQLITE_STORE_DIR=${SQLITE_STORE_DIR:-$HOME/.llama/distributions/remote-vllm} - SQLITE_STORE_DIR=${SQLITE_STORE_DIR:-$HOME/.llama/distributions/remote-vllm}
- LLAMA_SAFETY_VLLM_URL=${LLAMA_SAFETY_VLLM_URL:-http://host.docker.internal:5101/v1} - SAFETY_MODEL=${SAFETY_MODEL:-meta-llama/Llama-Guard-3-1B}
- LLAMA_SAFETY_MODEL=${LLAMA_SAFETY_MODEL:-Llama-Guard-3-1B}
ports: ports:
- "5001:5001" - "${LLAMASTACK_PORT:-5001}:${LLAMASTACK_PORT:-5001}"
# Hack: wait for vLLM server to start before starting docker # Hack: wait for vLLM server to start before starting docker
entrypoint: bash -c "sleep 60; python -m llama_stack.distribution.server.server --yaml_config /root/llamastack-run-remote-vllm.yaml --port 5001" entrypoint: bash -c "sleep 60; python -m llama_stack.distribution.server.server --yaml_config /root/llamastack-run-remote-vllm.yaml --port 5001"
deploy: deploy:
@ -89,6 +95,6 @@ services:
max_attempts: 5 max_attempts: 5
window: 60s window: 60s
volumes: volumes:
vllm-0: vllm-inference:
vllm-1: vllm-safety:
llamastack: llamastack:
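The `vllm-${VLLM_SAFETY_MODEL:+safety}` service name relies on POSIX `${VAR:+word}` expansion, as the in-file comment notes. A quick shell illustration of how the name resolves:

```bash
# ${VAR:+word} expands to "word" only when VAR is set and non-empty
unset VLLM_SAFETY_MODEL
echo "vllm-${VLLM_SAFETY_MODEL:+safety}"    # prints: vllm-        (dangling entry, ignored per the comment above)
VLLM_SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
echo "vllm-${VLLM_SAFETY_MODEL:+safety}"    # prints: vllm-safety  (real service)
```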

View file

@ -0,0 +1 @@
../../llama_stack/templates/remote-vllm/run-with-safety.yaml

View file

@ -1,68 +0,0 @@
version: '2'
built_at: '2024-11-11T20:09:45.988375'
image_name: remote-vllm
docker_image: remote-vllm
conda_env: null
apis:
- inference
- memory
- safety
- agents
- telemetry
providers:
inference:
# serves main inference model
- provider_id: vllm-0
provider_type: remote::vllm
config:
# NOTE: replace with "localhost" if you are running in "host" network mode
url: ${env.LLAMA_INFERENCE_VLLM_URL:http://host.docker.internal:5100/v1}
max_tokens: ${env.MAX_TOKENS:4096}
api_token: fake
# serves safety llama_guard model
- provider_id: vllm-1
provider_type: remote::vllm
config:
# NOTE: replace with "localhost" if you are running in "host" network mode
url: ${env.LLAMA_SAFETY_VLLM_URL:http://host.docker.internal:5101/v1}
max_tokens: ${env.MAX_TOKENS:4096}
api_token: fake
memory:
- provider_id: faiss-0
provider_type: inline::faiss
config:
kvstore:
namespace: null
type: sqlite
db_path: "${env.SQLITE_STORE_DIR:/home/ashwin/.llama/distributions/remote-vllm}/faiss_store.db"
safety:
- provider_id: llama-guard
provider_type: inline::llama-guard
config: {}
memory:
- provider_id: meta0
provider_type: inline::faiss
config: {}
agents:
- provider_id: meta0
provider_type: inline::meta-reference
config:
persistence_store:
namespace: null
type: sqlite
db_path: "${env.SQLITE_STORE_DIR:/home/ashwin/.llama/distributions/remote-vllm}/agents_store.db"
telemetry:
- provider_id: meta0
provider_type: inline::meta-reference
config: {}
metadata_store:
namespace: null
type: sqlite
db_path: "${env.SQLITE_STORE_DIR:/home/ashwin/.llama/distributions/remote-vllm}/registry.db"
models:
- model_id: ${env.LLAMA_INFERENCE_MODEL:Llama3.1-8B-Instruct}
provider_id: vllm-0
- model_id: ${env.LLAMA_SAFETY_MODEL:Llama-Guard-3-1B}
provider_id: vllm-1
shields:
- shield_id: ${env.LLAMA_SAFETY_MODEL:Llama-Guard-3-1B}

View file

@ -0,0 +1 @@
../../llama_stack/templates/remote-vllm/run.yaml

View file

@ -1,51 +1,89 @@
services: services:
text-generation-inference: tgi-inference:
image: ghcr.io/huggingface/text-generation-inference:latest image: ghcr.io/huggingface/text-generation-inference:latest
network_mode: "host"
volumes: volumes:
- $HOME/.cache/huggingface:/data - $HOME/.cache/huggingface:/data
network_mode: ${NETWORK_MODE:-bridged}
ports: ports:
- "5009:5009" - "${TGI_INFERENCE_PORT:-8080}:${TGI_INFERENCE_PORT:-8080}"
devices: devices:
- nvidia.com/gpu=all - nvidia.com/gpu=all
environment: environment:
- CUDA_VISIBLE_DEVICES=0 - CUDA_VISIBLE_DEVICES=${TGI_INFERENCE_GPU:-0}
- HF_TOKEN=$HF_TOKEN
- HF_HOME=/data - HF_HOME=/data
- HF_DATASETS_CACHE=/data - HF_DATASETS_CACHE=/data
- HF_MODULES_CACHE=/data - HF_MODULES_CACHE=/data
- HF_HUB_CACHE=/data - HF_HUB_CACHE=/data
command: ["--dtype", "bfloat16", "--usage-stats", "on", "--sharded", "false", "--model-id", "meta-llama/Llama-3.1-8B-Instruct", "--port", "5009", "--cuda-memory-fraction", "0.3"] command: >
--dtype bfloat16
--usage-stats off
--sharded false
--model-id ${TGI_INFERENCE_MODEL:-meta-llama/Llama-3.2-3B-Instruct}
--port ${TGI_INFERENCE_PORT:-8080}
--cuda-memory-fraction 0.75
healthcheck:
test: ["CMD", "curl", "-f", "http://tgi-inference:${TGI_INFERENCE_PORT:-8080}/health"]
interval: 5s
timeout: 5s
retries: 30
deploy: deploy:
resources: resources:
reservations: reservations:
devices: devices:
- driver: nvidia - driver: nvidia
# that's the closest analogue to --gpus; provide
# an integer amount of devices or 'all'
count: 1
# Devices are reserved using a list of capabilities, making
# capabilities the only required field. A device MUST
# satisfy all the requested capabilities for a successful
# reservation.
capabilities: [gpu] capabilities: [gpu]
runtime: nvidia runtime: nvidia
tgi-${TGI_SAFETY_MODEL:+safety}:
image: ghcr.io/huggingface/text-generation-inference:latest
volumes:
- $HOME/.cache/huggingface:/data
network_mode: ${NETWORK_MODE:-bridged}
ports:
- "${TGI_SAFETY_PORT:-8081}:${TGI_SAFETY_PORT:-8081}"
devices:
- nvidia.com/gpu=all
environment:
- CUDA_VISIBLE_DEVICES=${TGI_SAFETY_GPU:-1}
- HF_TOKEN=$HF_TOKEN
- HF_HOME=/data
- HF_DATASETS_CACHE=/data
- HF_MODULES_CACHE=/data
- HF_HUB_CACHE=/data
command: >
--dtype bfloat16
--usage-stats off
--sharded false
--model-id ${TGI_SAFETY_MODEL:-meta-llama/Llama-Guard-3-1B}
--port ${TGI_SAFETY_PORT:-8081}
--cuda-memory-fraction 0.75
healthcheck: healthcheck:
test: ["CMD", "curl", "-f", "http://text-generation-inference:5009/health"] test: ["CMD", "curl", "-f", "http://tgi-safety:${TGI_SAFETY_PORT:-8081}/health"]
interval: 5s interval: 5s
timeout: 5s timeout: 5s
retries: 30 retries: 30
deploy:
resources:
reservations:
devices:
- driver: nvidia
capabilities: [gpu]
runtime: nvidia
llamastack: llamastack:
depends_on: depends_on:
text-generation-inference: tgi-inference:
condition: service_healthy condition: service_healthy
image: llamastack/distribution-tgi tgi-${TGI_SAFETY_MODEL:+safety}:
network_mode: "host" condition: service_healthy
image: llamastack/distribution-tgi:test-0.0.52rc3
network_mode: ${NETWORK_MODE:-bridged}
volumes: volumes:
- ~/.llama:/root/.llama - ~/.llama:/root/.llama
# Link to TGI run.yaml file - ./run${TGI_SAFETY_MODEL:+-with-safety}.yaml:/root/my-run.yaml
- ./run.yaml:/root/my-run.yaml
ports: ports:
- "5000:5000" - "${LLAMA_STACK_PORT:-5001}:${LLAMA_STACK_PORT:-5001}"
# Hack: wait for TGI server to start before starting docker # Hack: wait for TGI server to start before starting docker
entrypoint: bash -c "sleep 60; python -m llama_stack.distribution.server.server --yaml_config /root/my-run.yaml" entrypoint: bash -c "sleep 60; python -m llama_stack.distribution.server.server --yaml_config /root/my-run.yaml"
restart_policy: restart_policy:
@ -53,3 +91,13 @@ services:
delay: 3s delay: 3s
max_attempts: 5 max_attempts: 5
window: 60s window: 60s
environment:
- TGI_URL=http://tgi-inference:${TGI_INFERENCE_PORT:-8080}
- SAFETY_TGI_URL=http://tgi-safety:${TGI_SAFETY_PORT:-8081}
- INFERENCE_MODEL=${INFERENCE_MODEL:-meta-llama/Llama-3.2-3B-Instruct}
- SAFETY_MODEL=${SAFETY_MODEL:-meta-llama/Llama-Guard-3-1B}
volumes:
tgi-inference:
tgi-safety:
llamastack:

View file

@ -0,0 +1 @@
../../llama_stack/templates/tgi/run-with-safety.yaml

View file

@ -1,45 +0,0 @@
version: '2'
built_at: '2024-10-08T17:40:45.325529'
image_name: local
docker_image: null
conda_env: local
apis:
- shields
- agents
- models
- memory
- memory_banks
- inference
- safety
providers:
inference:
- provider_id: tgi0
provider_type: remote::tgi
config:
url: http://127.0.0.1:5009
safety:
- provider_id: meta0
provider_type: inline::llama-guard
config:
model: Llama-Guard-3-1B
excluded_categories: []
- provider_id: meta1
provider_type: inline::prompt-guard
config:
model: Prompt-Guard-86M
memory:
- provider_id: meta0
provider_type: inline::meta-reference
config: {}
agents:
- provider_id: meta0
provider_type: inline::meta-reference
config:
persistence_store:
namespace: null
type: sqlite
db_path: ~/.llama/runtime/kvstore.db
telemetry:
- provider_id: meta0
provider_type: inline::meta-reference
config: {}

distributions/tgi/run.yaml Symbolic link
View file

@ -0,0 +1 @@
../../llama_stack/templates/tgi/run.yaml

View file

@ -1,46 +0,0 @@
version: '2'
built_at: '2024-10-08T17:40:45.325529'
image_name: local
docker_image: null
conda_env: local
apis:
- shields
- agents
- models
- memory
- memory_banks
- inference
- safety
providers:
inference:
- provider_id: together0
provider_type: remote::together
config:
url: https://api.together.xyz/v1
# api_key: <ENTER_YOUR_API_KEY>
safety:
- provider_id: meta0
provider_type: inline::llama-guard
config:
model: Llama-Guard-3-1B
excluded_categories: []
- provider_id: meta1
provider_type: inline::prompt-guard
config:
model: Prompt-Guard-86M
memory:
- provider_id: meta0
provider_type: remote::weaviate
config: {}
agents:
- provider_id: meta0
provider_type: inline::meta-reference
config:
persistence_store:
namespace: null
type: sqlite
db_path: ~/.llama/runtime/kvstore.db
telemetry:
- provider_id: meta0
provider_type: inline::meta-reference
config: {}

View file

@ -0,0 +1 @@
../../llama_stack/templates/together/run.yaml

View file

@ -31,7 +31,10 @@ from .strong_typing.schema import json_schema_type
schema_utils.json_schema_type = json_schema_type
from llama_stack.distribution.stack import LlamaStack
# this line needs to be here to ensure json_schema_type has been altered before
# the imports use the annotation
from llama_stack.apis.version import LLAMA_STACK_API_VERSION # noqa: E402
from llama_stack.distribution.stack import LlamaStack # noqa: E402
def main(output_dir: str):
@ -50,7 +53,7 @@ def main(output_dir: str):
server=Server(url="http://any-hosted-llama-stack.com"),
info=Info(
title="[DRAFT] Llama Stack Specification",
version="0.0.1",
version=LLAMA_STACK_API_VERSION,
description="""This is the specification of the llama stack that provides
a set of endpoints and their corresponding interfaces that are tailored to
best leverage Llama Models. The specification is still in draft and subject to change.

View file

@ -202,7 +202,9 @@ class ContentBuilder:
) -> MediaType:
schema = self.schema_builder.classdef_to_ref(item_type)
if self.schema_transformer:
schema_transformer: Callable[[SchemaOrRef], SchemaOrRef] = self.schema_transformer # type: ignore
schema_transformer: Callable[[SchemaOrRef], SchemaOrRef] = (
self.schema_transformer
)
schema = schema_transformer(schema)
if not examples:
@ -630,6 +632,7 @@ class Generator:
raise NotImplementedError(f"unknown HTTP method: {op.http_method}")
route = op.get_route()
print(f"route: {route}")
if route in paths:
paths[route].update(pathItem)
else:

View file

@ -12,6 +12,8 @@ import uuid
from dataclasses import dataclass
from typing import Any, Callable, Dict, Iterable, Iterator, List, Optional, Tuple, Union
from llama_stack.apis.version import LLAMA_STACK_API_VERSION
from termcolor import colored
from ..strong_typing.inspection import (
@ -111,9 +113,12 @@ class EndpointOperation:
def get_route(self) -> str:
if self.route is not None:
return self.route
assert (
"_" not in self.route
), f"route should not contain underscores: {self.route}"
return "/".join(["", LLAMA_STACK_API_VERSION, self.route.lstrip("/")])
route_parts = ["", self.name]
route_parts = ["", LLAMA_STACK_API_VERSION, self.name]
for param_name, _ in self.path_params:
route_parts.append("{" + param_name + "}")
return "/".join(route_parts)

View file

@ -20,8 +20,8 @@
"openapi": "3.1.0", "openapi": "3.1.0",
"info": { "info": {
"title": "[DRAFT] Llama Stack Specification", "title": "[DRAFT] Llama Stack Specification",
"version": "0.0.1", "version": "alpha",
"description": "This is the specification of the llama stack that provides\n a set of endpoints and their corresponding interfaces that are tailored to\n best leverage Llama Models. The specification is still in draft and subject to change.\n Generated at 2024-11-14 17:04:24.301559" "description": "This is the specification of the llama stack that provides\n a set of endpoints and their corresponding interfaces that are tailored to\n best leverage Llama Models. The specification is still in draft and subject to change.\n Generated at 2024-11-18 23:37:24.867143"
}, },
"servers": [ "servers": [
{ {
@ -29,7 +29,7 @@
} }
], ],
"paths": { "paths": {
"/batch_inference/chat_completion": { "/alpha/batch-inference/chat-completion": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -69,7 +69,7 @@
} }
} }
}, },
"/batch_inference/completion": { "/alpha/batch-inference/completion": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -109,7 +109,7 @@
} }
} }
}, },
"/post_training/job/cancel": { "/alpha/post-training/job/cancel": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -142,7 +142,7 @@
} }
} }
}, },
"/inference/chat_completion": { "/alpha/inference/chat-completion": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -189,7 +189,7 @@
} }
} }
}, },
"/inference/completion": { "/alpha/inference/completion": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -236,7 +236,7 @@
} }
} }
}, },
"/agents/create": { "/alpha/agents/create": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -276,7 +276,7 @@
} }
} }
}, },
"/agents/session/create": { "/alpha/agents/session/create": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -316,7 +316,7 @@
} }
} }
}, },
"/agents/turn/create": { "/alpha/agents/turn/create": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -363,7 +363,7 @@
} }
} }
}, },
"/agents/delete": { "/alpha/agents/delete": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -396,7 +396,7 @@
} }
} }
}, },
"/agents/session/delete": { "/alpha/agents/session/delete": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -429,7 +429,7 @@
} }
} }
}, },
"/inference/embeddings": { "/alpha/inference/embeddings": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -469,7 +469,7 @@
} }
} }
}, },
"/eval/evaluate_rows": { "/alpha/eval/evaluate-rows": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -509,7 +509,7 @@
} }
} }
}, },
"/agents/session/get": { "/alpha/agents/session/get": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -565,7 +565,7 @@
} }
} }
}, },
"/agents/step/get": { "/alpha/agents/step/get": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -627,7 +627,7 @@
] ]
} }
}, },
"/agents/turn/get": { "/alpha/agents/turn/get": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -681,7 +681,7 @@
] ]
} }
}, },
"/datasets/get": { "/alpha/datasets/get": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -726,7 +726,7 @@
] ]
} }
}, },
"/eval_tasks/get": { "/alpha/eval-tasks/get": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -771,7 +771,7 @@
] ]
} }
}, },
"/memory_banks/get": { "/alpha/memory-banks/get": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -829,7 +829,7 @@
] ]
} }
}, },
"/models/get": { "/alpha/models/get": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -874,7 +874,7 @@
] ]
} }
}, },
"/datasetio/get_rows_paginated": { "/alpha/datasetio/get-rows-paginated": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -936,7 +936,7 @@
] ]
} }
}, },
"/scoring_functions/get": { "/alpha/scoring-functions/get": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -981,7 +981,7 @@
] ]
} }
}, },
"/shields/get": { "/alpha/shields/get": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -1026,7 +1026,7 @@
] ]
} }
}, },
"/telemetry/get_trace": { "/alpha/telemetry/get-trace": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -1064,7 +1064,7 @@
] ]
} }
}, },
"/post_training/job/artifacts": { "/alpha/post-training/job/artifacts": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -1102,7 +1102,7 @@
] ]
} }
}, },
"/post_training/job/logs": { "/alpha/post-training/job/logs": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -1140,7 +1140,7 @@
] ]
} }
}, },
"/post_training/job/status": { "/alpha/post-training/job/status": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -1178,7 +1178,7 @@
] ]
} }
}, },
"/post_training/jobs": { "/alpha/post-training/jobs": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -1208,7 +1208,7 @@
] ]
} }
}, },
"/health": { "/alpha/health": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -1238,7 +1238,7 @@
] ]
} }
}, },
"/memory/insert": { "/alpha/memory/insert": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -1271,7 +1271,7 @@
} }
} }
}, },
"/eval/job/cancel": { "/alpha/eval/job/cancel": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -1304,7 +1304,7 @@
} }
} }
}, },
"/eval/job/result": { "/alpha/eval/job/result": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -1350,7 +1350,7 @@
] ]
} }
}, },
"/eval/job/status": { "/alpha/eval/job/status": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -1403,7 +1403,7 @@
] ]
} }
}, },
"/datasets/list": { "/alpha/datasets/list": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -1433,7 +1433,7 @@
] ]
} }
}, },
"/eval_tasks/list": { "/alpha/eval-tasks/list": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -1463,7 +1463,7 @@
] ]
} }
}, },
"/memory_banks/list": { "/alpha/memory-banks/list": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -1506,7 +1506,7 @@
] ]
} }
}, },
"/models/list": { "/alpha/models/list": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -1536,7 +1536,7 @@
] ]
} }
}, },
"/providers/list": { "/alpha/providers/list": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -1569,7 +1569,7 @@
] ]
} }
}, },
"/routes/list": { "/alpha/routes/list": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -1605,7 +1605,7 @@
] ]
} }
}, },
"/scoring_functions/list": { "/alpha/scoring-functions/list": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -1635,7 +1635,7 @@
] ]
} }
}, },
"/shields/list": { "/alpha/shields/list": {
"get": { "get": {
"responses": { "responses": {
"200": { "200": {
@ -1665,7 +1665,7 @@
] ]
} }
}, },
"/telemetry/log_event": { "/alpha/telemetry/log-event": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -1698,7 +1698,7 @@
} }
} }
}, },
"/post_training/preference_optimize": { "/alpha/post-training/preference-optimize": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -1738,7 +1738,7 @@
} }
} }
}, },
"/memory/query": { "/alpha/memory/query": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -1778,7 +1778,7 @@
} }
} }
}, },
"/datasets/register": { "/alpha/datasets/register": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -1811,7 +1811,7 @@
} }
} }
}, },
"/eval_tasks/register": { "/alpha/eval-tasks/register": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -1844,7 +1844,7 @@
} }
} }
}, },
"/memory_banks/register": { "/alpha/memory-banks/register": {
"post": { "post": {
"responses": {}, "responses": {},
"tags": [ "tags": [
@ -1873,7 +1873,7 @@
} }
} }
}, },
"/models/register": { "/alpha/models/register": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -1913,7 +1913,7 @@
} }
} }
}, },
"/scoring_functions/register": { "/alpha/scoring-functions/register": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -1946,7 +1946,7 @@
} }
} }
}, },
"/shields/register": { "/alpha/shields/register": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -1986,7 +1986,7 @@
} }
} }
}, },
"/eval/run_eval": { "/alpha/eval/run-eval": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -2026,7 +2026,7 @@
} }
} }
}, },
"/safety/run_shield": { "/alpha/safety/run-shield": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -2066,7 +2066,7 @@
} }
} }
}, },
"/scoring/score": { "/alpha/scoring/score": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -2106,7 +2106,7 @@
} }
} }
}, },
"/scoring/score_batch": { "/alpha/scoring/score-batch": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -2146,7 +2146,7 @@
} }
} }
}, },
"/post_training/supervised_fine_tune": { "/alpha/post-training/supervised-fine-tune": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -2186,7 +2186,7 @@
} }
} }
}, },
"/synthetic_data_generation/generate": { "/alpha/synthetic-data-generation/generate": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -2226,7 +2226,7 @@
} }
} }
}, },
"/memory_banks/unregister": { "/alpha/memory-banks/unregister": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {
@ -2259,7 +2259,7 @@
} }
} }
}, },
"/models/unregister": { "/alpha/models/unregister": {
"post": { "post": {
"responses": { "responses": {
"200": { "200": {

View file

@ -3400,13 +3400,13 @@ info:
description: "This is the specification of the llama stack that provides\n \ description: "This is the specification of the llama stack that provides\n \
\ a set of endpoints and their corresponding interfaces that are tailored\ \ a set of endpoints and their corresponding interfaces that are tailored\
\ to\n best leverage Llama Models. The specification is still in\ \ to\n best leverage Llama Models. The specification is still in\
\ draft and subject to change.\n Generated at 2024-11-14 17:04:24.301559" \ draft and subject to change.\n Generated at 2024-11-18 23:37:24.867143"
title: '[DRAFT] Llama Stack Specification' title: '[DRAFT] Llama Stack Specification'
version: 0.0.1 version: alpha
jsonSchemaDialect: https://json-schema.org/draft/2020-12/schema jsonSchemaDialect: https://json-schema.org/draft/2020-12/schema
openapi: 3.1.0 openapi: 3.1.0
paths: paths:
/agents/create: /alpha/agents/create:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -3431,7 +3431,7 @@ paths:
description: OK description: OK
tags: tags:
- Agents - Agents
/agents/delete: /alpha/agents/delete:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -3452,7 +3452,7 @@ paths:
description: OK description: OK
tags: tags:
- Agents - Agents
/agents/session/create: /alpha/agents/session/create:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -3477,7 +3477,7 @@ paths:
description: OK description: OK
tags: tags:
- Agents - Agents
/agents/session/delete: /alpha/agents/session/delete:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -3498,7 +3498,7 @@ paths:
description: OK description: OK
tags: tags:
- Agents - Agents
/agents/session/get: /alpha/agents/session/get:
post: post:
parameters: parameters:
- in: query - in: query
@ -3533,7 +3533,7 @@ paths:
description: OK description: OK
tags: tags:
- Agents - Agents
/agents/step/get: /alpha/agents/step/get:
get: get:
parameters: parameters:
- in: query - in: query
@ -3572,7 +3572,7 @@ paths:
description: OK description: OK
tags: tags:
- Agents - Agents
/agents/turn/create: /alpha/agents/turn/create:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -3600,7 +3600,7 @@ paths:
streamed agent turn completion response. streamed agent turn completion response.
tags: tags:
- Agents - Agents
/agents/turn/get: /alpha/agents/turn/get:
get: get:
parameters: parameters:
- in: query - in: query
@ -3634,7 +3634,7 @@ paths:
description: OK description: OK
tags: tags:
- Agents - Agents
/batch_inference/chat_completion: /alpha/batch-inference/chat-completion:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -3659,7 +3659,7 @@ paths:
description: OK description: OK
tags: tags:
- BatchInference - BatchInference
/batch_inference/completion: /alpha/batch-inference/completion:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -3684,7 +3684,7 @@ paths:
description: OK description: OK
tags: tags:
- BatchInference - BatchInference
/datasetio/get_rows_paginated: /alpha/datasetio/get-rows-paginated:
get: get:
parameters: parameters:
- in: query - in: query
@ -3723,7 +3723,7 @@ paths:
description: OK description: OK
tags: tags:
- DatasetIO - DatasetIO
/datasets/get: /alpha/datasets/get:
get: get:
parameters: parameters:
- in: query - in: query
@ -3749,7 +3749,7 @@ paths:
description: OK description: OK
tags: tags:
- Datasets - Datasets
/datasets/list: /alpha/datasets/list:
get: get:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -3768,7 +3768,7 @@ paths:
description: OK description: OK
tags: tags:
- Datasets - Datasets
/datasets/register: /alpha/datasets/register:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -3789,7 +3789,73 @@ paths:
description: OK description: OK
tags: tags:
- Datasets - Datasets
/eval/evaluate_rows: /alpha/eval-tasks/get:
get:
parameters:
- in: query
name: name
required: true
schema:
type: string
- description: JSON-encoded provider data which will be made available to the
adapter servicing the API
in: header
name: X-LlamaStack-ProviderData
required: false
schema:
type: string
responses:
'200':
content:
application/json:
schema:
oneOf:
- $ref: '#/components/schemas/EvalTask'
- type: 'null'
description: OK
tags:
- EvalTasks
/alpha/eval-tasks/list:
get:
parameters:
- description: JSON-encoded provider data which will be made available to the
adapter servicing the API
in: header
name: X-LlamaStack-ProviderData
required: false
schema:
type: string
responses:
'200':
content:
application/jsonl:
schema:
$ref: '#/components/schemas/EvalTask'
description: OK
tags:
- EvalTasks
/alpha/eval-tasks/register:
post:
parameters:
- description: JSON-encoded provider data which will be made available to the
adapter servicing the API
in: header
name: X-LlamaStack-ProviderData
required: false
schema:
type: string
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/RegisterEvalTaskRequest'
required: true
responses:
'200':
description: OK
tags:
- EvalTasks
/alpha/eval/evaluate-rows:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -3814,7 +3880,7 @@ paths:
description: OK description: OK
tags: tags:
- Eval - Eval
/eval/job/cancel: /alpha/eval/job/cancel:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -3835,7 +3901,7 @@ paths:
description: OK description: OK
tags: tags:
- Eval - Eval
/eval/job/result: /alpha/eval/job/result:
get: get:
parameters: parameters:
- in: query - in: query
@ -3864,7 +3930,7 @@ paths:
description: OK description: OK
tags: tags:
- Eval - Eval
/eval/job/status: /alpha/eval/job/status:
get: get:
parameters: parameters:
- in: query - in: query
@ -3895,7 +3961,7 @@ paths:
description: OK description: OK
tags: tags:
- Eval - Eval
/eval/run_eval: /alpha/eval/run-eval:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -3920,73 +3986,7 @@ paths:
description: OK description: OK
tags: tags:
- Eval - Eval
/eval_tasks/get: /alpha/health:
get:
parameters:
- in: query
name: name
required: true
schema:
type: string
- description: JSON-encoded provider data which will be made available to the
adapter servicing the API
in: header
name: X-LlamaStack-ProviderData
required: false
schema:
type: string
responses:
'200':
content:
application/json:
schema:
oneOf:
- $ref: '#/components/schemas/EvalTask'
- type: 'null'
description: OK
tags:
- EvalTasks
/eval_tasks/list:
get:
parameters:
- description: JSON-encoded provider data which will be made available to the
adapter servicing the API
in: header
name: X-LlamaStack-ProviderData
required: false
schema:
type: string
responses:
'200':
content:
application/jsonl:
schema:
$ref: '#/components/schemas/EvalTask'
description: OK
tags:
- EvalTasks
/eval_tasks/register:
post:
parameters:
- description: JSON-encoded provider data which will be made available to the
adapter servicing the API
in: header
name: X-LlamaStack-ProviderData
required: false
schema:
type: string
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/RegisterEvalTaskRequest'
required: true
responses:
'200':
description: OK
tags:
- EvalTasks
/health:
get: get:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4005,7 +4005,7 @@ paths:
description: OK description: OK
tags: tags:
- Inspect - Inspect
/inference/chat_completion: /alpha/inference/chat-completion:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4032,7 +4032,7 @@ paths:
description: Chat completion response. **OR** SSE-stream of these events. description: Chat completion response. **OR** SSE-stream of these events.
tags: tags:
- Inference - Inference
/inference/completion: /alpha/inference/completion:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4059,7 +4059,7 @@ paths:
description: Completion response. **OR** streamed completion response. description: Completion response. **OR** streamed completion response.
tags: tags:
- Inference - Inference
/inference/embeddings: /alpha/inference/embeddings:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4084,53 +4084,7 @@ paths:
description: OK description: OK
tags: tags:
- Inference - Inference
/memory/insert: /alpha/memory-banks/get:
post:
parameters:
- description: JSON-encoded provider data which will be made available to the
adapter servicing the API
in: header
name: X-LlamaStack-ProviderData
required: false
schema:
type: string
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/InsertDocumentsRequest'
required: true
responses:
'200':
description: OK
tags:
- Memory
/memory/query:
post:
parameters:
- description: JSON-encoded provider data which will be made available to the
adapter servicing the API
in: header
name: X-LlamaStack-ProviderData
required: false
schema:
type: string
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/QueryDocumentsRequest'
required: true
responses:
'200':
content:
application/json:
schema:
$ref: '#/components/schemas/QueryDocumentsResponse'
description: OK
tags:
- Memory
/memory_banks/get:
get: get:
parameters: parameters:
- in: query - in: query
@ -4160,7 +4114,7 @@ paths:
description: OK description: OK
tags: tags:
- MemoryBanks - MemoryBanks
/memory_banks/list: /alpha/memory-banks/list:
get: get:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4183,7 +4137,7 @@ paths:
description: OK description: OK
tags: tags:
- MemoryBanks - MemoryBanks
/memory_banks/register: /alpha/memory-banks/register:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4202,7 +4156,7 @@ paths:
responses: {} responses: {}
tags: tags:
- MemoryBanks - MemoryBanks
/memory_banks/unregister: /alpha/memory-banks/unregister:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4223,7 +4177,53 @@ paths:
description: OK description: OK
tags: tags:
- MemoryBanks - MemoryBanks
/models/get: /alpha/memory/insert:
post:
parameters:
- description: JSON-encoded provider data which will be made available to the
adapter servicing the API
in: header
name: X-LlamaStack-ProviderData
required: false
schema:
type: string
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/InsertDocumentsRequest'
required: true
responses:
'200':
description: OK
tags:
- Memory
/alpha/memory/query:
post:
parameters:
- description: JSON-encoded provider data which will be made available to the
adapter servicing the API
in: header
name: X-LlamaStack-ProviderData
required: false
schema:
type: string
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/QueryDocumentsRequest'
required: true
responses:
'200':
content:
application/json:
schema:
$ref: '#/components/schemas/QueryDocumentsResponse'
description: OK
tags:
- Memory
/alpha/models/get:
get: get:
parameters: parameters:
- in: query - in: query
@ -4249,7 +4249,7 @@ paths:
description: OK description: OK
tags: tags:
- Models - Models
/models/list: /alpha/models/list:
get: get:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4268,7 +4268,7 @@ paths:
description: OK description: OK
tags: tags:
- Models - Models
/models/register: /alpha/models/register:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4293,7 +4293,7 @@ paths:
description: OK description: OK
tags: tags:
- Models - Models
/models/unregister: /alpha/models/unregister:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4314,7 +4314,7 @@ paths:
description: OK description: OK
tags: tags:
- Models - Models
/post_training/job/artifacts: /alpha/post-training/job/artifacts:
get: get:
parameters: parameters:
- in: query - in: query
@ -4338,7 +4338,7 @@ paths:
description: OK description: OK
tags: tags:
- PostTraining - PostTraining
/post_training/job/cancel: /alpha/post-training/job/cancel:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4359,7 +4359,7 @@ paths:
description: OK description: OK
tags: tags:
- PostTraining - PostTraining
/post_training/job/logs: /alpha/post-training/job/logs:
get: get:
parameters: parameters:
- in: query - in: query
@ -4383,7 +4383,7 @@ paths:
description: OK description: OK
tags: tags:
- PostTraining - PostTraining
/post_training/job/status: /alpha/post-training/job/status:
get: get:
parameters: parameters:
- in: query - in: query
@ -4407,7 +4407,7 @@ paths:
description: OK description: OK
tags: tags:
- PostTraining - PostTraining
/post_training/jobs: /alpha/post-training/jobs:
get: get:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4426,7 +4426,7 @@ paths:
description: OK description: OK
tags: tags:
- PostTraining - PostTraining
/post_training/preference_optimize: /alpha/post-training/preference-optimize:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4451,7 +4451,7 @@ paths:
description: OK description: OK
tags: tags:
- PostTraining - PostTraining
/post_training/supervised_fine_tune: /alpha/post-training/supervised-fine-tune:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4476,7 +4476,7 @@ paths:
description: OK description: OK
tags: tags:
- PostTraining - PostTraining
/providers/list: /alpha/providers/list:
get: get:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4497,7 +4497,7 @@ paths:
description: OK description: OK
tags: tags:
- Inspect - Inspect
/routes/list: /alpha/routes/list:
get: get:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4520,7 +4520,7 @@ paths:
description: OK description: OK
tags: tags:
- Inspect - Inspect
/safety/run_shield: /alpha/safety/run-shield:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4545,57 +4545,7 @@ paths:
description: OK description: OK
tags: tags:
- Safety - Safety
/scoring/score: /alpha/scoring-functions/get:
post:
parameters:
- description: JSON-encoded provider data which will be made available to the
adapter servicing the API
in: header
name: X-LlamaStack-ProviderData
required: false
schema:
type: string
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/ScoreRequest'
required: true
responses:
'200':
content:
application/json:
schema:
$ref: '#/components/schemas/ScoreResponse'
description: OK
tags:
- Scoring
/scoring/score_batch:
post:
parameters:
- description: JSON-encoded provider data which will be made available to the
adapter servicing the API
in: header
name: X-LlamaStack-ProviderData
required: false
schema:
type: string
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/ScoreBatchRequest'
required: true
responses:
'200':
content:
application/json:
schema:
$ref: '#/components/schemas/ScoreBatchResponse'
description: OK
tags:
- Scoring
/scoring_functions/get:
get: get:
parameters: parameters:
- in: query - in: query
@ -4621,7 +4571,7 @@ paths:
description: OK description: OK
tags: tags:
- ScoringFunctions - ScoringFunctions
/scoring_functions/list: /alpha/scoring-functions/list:
get: get:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4640,7 +4590,7 @@ paths:
description: OK description: OK
tags: tags:
- ScoringFunctions - ScoringFunctions
/scoring_functions/register: /alpha/scoring-functions/register:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4661,7 +4611,57 @@ paths:
description: OK description: OK
tags: tags:
- ScoringFunctions - ScoringFunctions
/shields/get: /alpha/scoring/score:
post:
parameters:
- description: JSON-encoded provider data which will be made available to the
adapter servicing the API
in: header
name: X-LlamaStack-ProviderData
required: false
schema:
type: string
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/ScoreRequest'
required: true
responses:
'200':
content:
application/json:
schema:
$ref: '#/components/schemas/ScoreResponse'
description: OK
tags:
- Scoring
/alpha/scoring/score-batch:
post:
parameters:
- description: JSON-encoded provider data which will be made available to the
adapter servicing the API
in: header
name: X-LlamaStack-ProviderData
required: false
schema:
type: string
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/ScoreBatchRequest'
required: true
responses:
'200':
content:
application/json:
schema:
$ref: '#/components/schemas/ScoreBatchResponse'
description: OK
tags:
- Scoring
/alpha/shields/get:
get: get:
parameters: parameters:
- in: query - in: query
@ -4687,7 +4687,7 @@ paths:
description: OK description: OK
tags: tags:
- Shields - Shields
/shields/list: /alpha/shields/list:
get: get:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4706,7 +4706,7 @@ paths:
description: OK description: OK
tags: tags:
- Shields - Shields
/shields/register: /alpha/shields/register:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4731,7 +4731,7 @@ paths:
description: OK description: OK
tags: tags:
- Shields - Shields
/synthetic_data_generation/generate: /alpha/synthetic-data-generation/generate:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
@ -4756,7 +4756,7 @@ paths:
description: OK description: OK
tags: tags:
- SyntheticDataGeneration - SyntheticDataGeneration
/telemetry/get_trace: /alpha/telemetry/get-trace:
get: get:
parameters: parameters:
- in: query - in: query
@ -4780,7 +4780,7 @@ paths:
description: OK description: OK
tags: tags:
- Telemetry - Telemetry
/telemetry/log_event: /alpha/telemetry/log-event:
post: post:
parameters: parameters:
- description: JSON-encoded provider data which will be made available to the - description: JSON-encoded provider data which will be made available to the
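For reference, a minimal client call against one of the renamed `/alpha/` routes might look like the following sketch. It assumes a Llama Stack server listening on `http://localhost:5001`; the `X-LlamaStack-ProviderData` header is optional and carries JSON-encoded provider data as described above.

```python
# Sketch: list registered models via the /alpha/-prefixed route.
import requests

resp = requests.get(
    "http://localhost:5001/alpha/models/list",
    headers={"X-LlamaStack-ProviderData": "{}"},  # optional JSON-encoded provider data
    timeout=30,
)
print(resp.status_code)
print(resp.text[:500])  # response schema omitted here; see the components section of the spec
```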

View file

@ -2,63 +2,67 @@
The `llamastack/distribution-fireworks` distribution consists of the following provider configurations. The `llamastack/distribution-fireworks` distribution consists of the following provider configurations.
| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| inference | `remote::fireworks` |
| memory | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
| safety | `inline::llama-guard` |
| telemetry | `inline::meta-reference` |
| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|----------------- |--------------- |---------------- |-------------------------------------------------- |---------------- |---------------- |
| **Provider(s)** | remote::fireworks | meta-reference | meta-reference | meta-reference | meta-reference |
### Step 0. Prerequisite ### Environment Variables
- Make sure you have access to a fireworks API Key. You can get one by visiting [fireworks.ai](https://fireworks.ai/)
### Step 1. Start the Distribution (Single Node CPU) The following environment variables can be configured:
#### (Option 1) Start Distribution Via Docker - `LLAMASTACK_PORT`: Port for the Llama Stack distribution server (default: `5001`)
> [!NOTE] - `FIREWORKS_API_KEY`: Fireworks.AI API Key (default: ``)
> This assumes you have a hosted endpoint at Fireworks with an API Key.
``` ### Models
$ cd distributions/fireworks && docker compose up
The following models are available by default:
- `meta-llama/Llama-3.1-8B-Instruct (fireworks/llama-v3p1-8b-instruct)`
- `meta-llama/Llama-3.1-70B-Instruct (fireworks/llama-v3p1-70b-instruct)`
- `meta-llama/Llama-3.1-405B-Instruct-FP8 (fireworks/llama-v3p1-405b-instruct)`
- `meta-llama/Llama-3.2-1B-Instruct (fireworks/llama-v3p2-1b-instruct)`
- `meta-llama/Llama-3.2-3B-Instruct (fireworks/llama-v3p2-3b-instruct)`
- `meta-llama/Llama-3.2-11B-Vision-Instruct (fireworks/llama-v3p2-11b-vision-instruct)`
- `meta-llama/Llama-3.2-90B-Vision-Instruct (fireworks/llama-v3p2-90b-vision-instruct)`
- `meta-llama/Llama-Guard-3-8B (fireworks/llama-guard-3-8b)`
- `meta-llama/Llama-Guard-3-11B-Vision (fireworks/llama-guard-3-11b-vision)`
### Prerequisite: API Keys
Make sure you have access to a Fireworks API Key. You can get one by visiting [fireworks.ai](https://fireworks.ai/).
## Running Llama Stack with Fireworks
You can do this via Conda (build code) or Docker which has a pre-built image.
### Via Docker
This method allows you to get started quickly without having to build the distribution code.
```bash
LLAMA_STACK_PORT=5001
docker run \
-it \
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
-v ./run.yaml:/root/my-run.yaml \
llamastack/distribution-fireworks \
--yaml-config /root/my-run.yaml \
--port $LLAMA_STACK_PORT \
--env FIREWORKS_API_KEY=$FIREWORKS_API_KEY
``` ```
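As a quick sanity check (a sketch, assuming the server is published on port 5001), you can confirm the distribution is up and that the Fireworks provider is wired in by listing providers over the REST API:

```python
import requests

resp = requests.get("http://localhost:5001/alpha/providers/list", timeout=30)
print(resp.status_code)
print(resp.text[:500])  # the Fireworks inference provider should appear in the listing
```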
Make sure that in your `run.yaml` file, your inference provider is pointing to the correct Fireworks server endpoint. E.g. ### Via Conda
```
inference:
- provider_id: fireworks
provider_type: remote::fireworks
config:
url: https://api.fireworks.ai/inference
api_key: <optional api key>
```
#### (Option 2) Start Distribution Via Conda
```bash ```bash
llama stack build --template fireworks --image-type conda llama stack build --template fireworks --image-type conda
# -- modify run.yaml to a valid Fireworks server endpoint llama stack run ./run.yaml \
llama stack run ./run.yaml --port 5001 \
``` --env FIREWORKS_API_KEY=$FIREWORKS_API_KEY
### (Optional) Model Serving
Use `llama-stack-client models list` to check the available models served by Fireworks.
```
$ llama-stack-client models list
+------------------------------+------------------------------+---------------+------------+
| identifier | llama_model | provider_id | metadata |
+==============================+==============================+===============+============+
| Llama3.1-8B-Instruct | Llama3.1-8B-Instruct | fireworks0 | {} |
+------------------------------+------------------------------+---------------+------------+
| Llama3.1-70B-Instruct | Llama3.1-70B-Instruct | fireworks0 | {} |
+------------------------------+------------------------------+---------------+------------+
| Llama3.1-405B-Instruct | Llama3.1-405B-Instruct | fireworks0 | {} |
+------------------------------+------------------------------+---------------+------------+
| Llama3.2-1B-Instruct | Llama3.2-1B-Instruct | fireworks0 | {} |
+------------------------------+------------------------------+---------------+------------+
| Llama3.2-3B-Instruct | Llama3.2-3B-Instruct | fireworks0 | {} |
+------------------------------+------------------------------+---------------+------------+
| Llama3.2-11B-Vision-Instruct | Llama3.2-11B-Vision-Instruct | fireworks0 | {} |
+------------------------------+------------------------------+---------------+------------+
| Llama3.2-90B-Vision-Instruct | Llama3.2-90B-Vision-Instruct | fireworks0 | {} |
+------------------------------+------------------------------+---------------+------------+
``` ```

View file

@ -1,15 +1,32 @@
# Meta Reference Distribution # Meta Reference Distribution
The `llamastack/distribution-meta-reference-gpu` distribution consists of the following provider configurations. The `llamastack/distribution-meta-reference-gpu` distribution consists of the following provider configurations:
| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| inference | `inline::meta-reference` |
| memory | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
| safety | `inline::llama-guard` |
| telemetry | `inline::meta-reference` |
| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** | Note that you need access to nvidia GPUs to run this distribution. This distribution is not compatible with CPU-only machines or machines with AMD GPUs.
|----------------- |--------------- |---------------- |-------------------------------------------------- |---------------- |---------------- |
| **Provider(s)** | meta-reference | meta-reference | meta-reference, remote::pgvector, remote::chroma | meta-reference | meta-reference | ### Environment Variables
The following environment variables can be configured:
- `LLAMASTACK_PORT`: Port for the Llama Stack distribution server (default: `5001`)
- `INFERENCE_MODEL`: Inference model loaded into the Meta Reference server (default: `meta-llama/Llama-3.2-3B-Instruct`)
- `INFERENCE_CHECKPOINT_DIR`: Directory containing the Meta Reference model checkpoint (default: `null`)
- `SAFETY_MODEL`: Name of the safety (Llama-Guard) model to use (default: `meta-llama/Llama-Guard-3-1B`)
- `SAFETY_CHECKPOINT_DIR`: Directory containing the Llama-Guard model checkpoint (default: `null`)
### Step 0. Prerequisite - Downloading Models ## Prerequisite: Downloading Models
Please make sure you have llama model checkpoints downloaded in `~/.llama` before proceeding. See [installation guide](https://llama-stack.readthedocs.io/en/latest/cli_reference/download_models.html) here to download the models.
Please make sure you have llama model checkpoints downloaded in `~/.llama` before proceeding. See [installation guide](https://llama-stack.readthedocs.io/en/latest/cli_reference/download_models.html) here to download the models. Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.
``` ```
$ ls ~/.llama/checkpoints $ ls ~/.llama/checkpoints
@ -17,55 +34,56 @@ Llama3.1-8B Llama3.2-11B-Vision-Instruct Llama3.2-1B-Instruct Llama3
Llama3.1-8B-Instruct Llama3.2-1B Llama3.2-3B-Instruct Llama-Guard-3-1B Prompt-Guard-86M Llama3.1-8B-Instruct Llama3.2-1B Llama3.2-3B-Instruct Llama-Guard-3-1B Prompt-Guard-86M
``` ```
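As an optional pre-flight check (a sketch; the directory name is taken from the listing above), you can confirm a checkpoint is present before launching:

```python
from pathlib import Path

ckpt = Path.home() / ".llama" / "checkpoints" / "Llama3.2-3B-Instruct"
if ckpt.exists():
    print(f"Found checkpoint at {ckpt}")
else:
    print(f"Missing {ckpt} -- run `llama model download` first")
```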
### Step 1. Start the Distribution ## Running the Distribution
#### (Option 1) Start with Docker You can do this via Conda (build code) or Docker which has a pre-built image.
```
$ cd distributions/meta-reference-gpu && docker compose up ### Via Docker
This method allows you to get started quickly without having to build the distribution code.
```bash
LLAMA_STACK_PORT=5001
docker run \
-it \
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
-v ./run.yaml:/root/my-run.yaml \
llamastack/distribution-meta-reference-gpu \
/root/my-run.yaml \
--port $LLAMA_STACK_PORT \
--env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
``` ```
> [!NOTE] If you are using Llama Stack Safety / Shield APIs, use:
> This assumes you have access to a GPU to start a local server.
```bash
> [!NOTE] docker run \
> `~/.llama` should be the path containing downloaded weights of Llama models. -it \
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
-v ./run-with-safety.yaml:/root/my-run.yaml \
This will download and start running a pre-built docker container. Alternatively, you may use the following commands: llamastack/distribution-meta-reference-gpu \
/root/my-run.yaml \
``` --port $LLAMA_STACK_PORT \
docker run -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./run.yaml:/root/my-run.yaml --gpus=all distribution-meta-reference-gpu --yaml_config /root/my-run.yaml --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
--env SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
``` ```
#### (Option 2) Start with Conda ### Via Conda
1. Install the `llama` CLI. See [CLI Reference](https://llama-stack.readthedocs.io/en/latest/cli_reference/index.html) Make sure you have done `pip install llama-stack` and have the Llama Stack CLI available.
2. Build the `meta-reference-gpu` distribution ```bash
llama stack build --template meta-reference-gpu --image-type conda
``` llama stack run ./run.yaml \
$ llama stack build --template meta-reference-gpu --image-type conda --port 5001 \
--env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
``` ```
3. Start running distribution If you are using Llama Stack Safety / Shield APIs, use:
```
$ cd distributions/meta-reference-gpu
$ llama stack run ./run.yaml
```
### (Optional) Serving a new model ```bash
You may change the `config.model` in `run.yaml` to update the model currently being served by the distribution. Make sure you have the model checkpoint downloaded in your `~/.llama`. llama stack run ./run-with-safety.yaml \
--port 5001 \
--env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
--env SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
``` ```
inference:
- provider_id: meta0
provider_type: inline::meta-reference
config:
model: Llama3.2-11B-Vision-Instruct
quantization: null
torch_seed: null
max_seq_len: 4096
max_batch_size: 1
```
Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.

View file

@ -2,103 +2,114 @@
The `llamastack/distribution-ollama` distribution consists of the following provider configurations. The `llamastack/distribution-ollama` distribution consists of the following provider configurations.
| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** | | API | Provider(s) |
|----------------- |---------------- |---------------- |------------------------------------ |---------------- |---------------- | |-----|-------------|
| **Provider(s)** | remote::ollama | meta-reference | remote::pgvector, remote::chromadb | meta-reference | meta-reference | | agents | `inline::meta-reference` |
| inference | `remote::ollama` |
| memory | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
| safety | `inline::llama-guard` |
| telemetry | `inline::meta-reference` |
## Using Docker Compose You should use this distribution if you have a regular desktop machine without very powerful GPUs. Of course, if you have powerful GPUs, you can still continue using this distribution since Ollama supports GPU acceleration.
### Environment Variables
You can use `docker compose` to start an Ollama server and connect with the Llama Stack server in a single command. The following environment variables can be configured:
### Docker: Start the Distribution (Single Node regular Desktop machine) - `LLAMASTACK_PORT`: Port for the Llama Stack distribution server (default: `5001`)
- `OLLAMA_URL`: URL of the Ollama server (default: `http://127.0.0.1:11434`)
- `INFERENCE_MODEL`: Inference model loaded into the Ollama server (default: `meta-llama/Llama-3.2-3B-Instruct`)
- `SAFETY_MODEL`: Safety model loaded into the Ollama server (default: `meta-llama/Llama-Guard-3-1B`)
> [!NOTE]
> This will start an Ollama server in CPU-only mode; please see the [Ollama Documentation](https://github.com/ollama/ollama) for serving models on CPU only. ## Setting up Ollama server
Please check the [Ollama Documentation](https://github.com/ollama/ollama) on how to install and run Ollama. After installing Ollama, you need to run `ollama serve` to start the server.
In order to load models, you can run:
```bash ```bash
$ cd distributions/ollama; docker compose up export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
# ollama names this model differently, and we must use the ollama name when loading the model
export OLLAMA_INFERENCE_MODEL="llama3.2:3b-instruct-fp16"
ollama run $OLLAMA_INFERENCE_MODEL --keepalive 60m
``` ```
### Docker: Start a Distribution (Single Node with nvidia GPUs) If you are using Llama Stack Safety / Shield APIs, you will also need to pull and run the safety model.
> [!NOTE]
> This assumes you have access to a GPU to start an Ollama server with GPU support.
```bash ```bash
$ cd distributions/ollama-gpu; docker compose up export SAFETY_MODEL="meta-llama/Llama-Guard-3-1B"
# ollama names this model differently, and we must use the ollama name when loading the model
export OLLAMA_SAFETY_MODEL="llama-guard3:1b"
ollama run $OLLAMA_SAFETY_MODEL --keepalive 60m
``` ```
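To confirm that Ollama is serving and the model is loaded, a small sketch against Ollama's `/api/ps` endpoint (assuming the default URL from the table above) can help:

```python
import requests

OLLAMA_URL = "http://127.0.0.1:11434"
resp = requests.get(f"{OLLAMA_URL}/api/ps", timeout=10)
resp.raise_for_status()
loaded = [m.get("name") for m in resp.json().get("models", [])]
print("Loaded models:", loaded)
```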
You will see outputs similar to following --- ## Running Llama Stack
Now you are ready to run Llama Stack with Ollama as the inference provider. You can do this via Conda (build code) or Docker which has a pre-built image.
### Via Docker
This method allows you to get started quickly without having to build the distribution code.
```bash ```bash
[ollama] | [GIN] 2024/10/18 - 21:19:41 | 200 | 226.841µs | ::1 | GET "/api/ps" export LLAMA_STACK_PORT=5001
[ollama] | [GIN] 2024/10/18 - 21:19:42 | 200 | 60.908µs | ::1 | GET "/api/ps" docker run \
INFO: Started server process [1] -it \
INFO: Waiting for application startup. -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
INFO: Application startup complete. -v ~/.llama:/root/.llama \
INFO: Uvicorn running on http://[::]:5000 (Press CTRL+C to quit) -v ./run.yaml:/root/my-run.yaml \
[llamastack] | Resolved 12 providers llamastack/distribution-ollama \
[llamastack] | inner-inference => ollama0 --yaml-config /root/my-run.yaml \
[llamastack] | models => __routing_table__ --port $LLAMA_STACK_PORT \
[llamastack] | inference => __autorouted__ --env INFERENCE_MODEL=$INFERENCE_MODEL \
--env OLLAMA_URL=http://host.docker.internal:11434
``` ```
To kill the server If you are using Llama Stack Safety / Shield APIs, use:
```bash ```bash
docker compose down docker run \
-it \
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
-v ~/.llama:/root/.llama \
-v ./run-with-safety.yaml:/root/my-run.yaml \
llamastack/distribution-ollama \
--yaml-config /root/my-run.yaml \
--port $LLAMA_STACK_PORT \
--env INFERENCE_MODEL=$INFERENCE_MODEL \
--env SAFETY_MODEL=$SAFETY_MODEL \
--env OLLAMA_URL=http://host.docker.internal:11434
``` ```
## Starting Ollama and Llama Stack separately ### Via Conda
If you wish to separately spin up an Ollama server and connect with Llama Stack, you should use the following commands. Make sure you have done `pip install llama-stack` and have the Llama Stack CLI available.
#### Start Ollama server
- Please check the [Ollama Documentation](https://github.com/ollama/ollama) for more details.
**Via Docker**
```bash
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
**Via CLI**
```bash
ollama run <model_id>
```
#### Start Llama Stack server pointing to Ollama server
**Via Conda**
```bash ```bash
export LLAMA_STACK_PORT=5001
llama stack build --template ollama --image-type conda llama stack build --template ollama --image-type conda
llama stack run ./gpu/run.yaml llama stack run ./run.yaml \
--port $LLAMA_STACK_PORT \
--env INFERENCE_MODEL=$INFERENCE_MODEL \
--env OLLAMA_URL=http://localhost:11434
``` ```
**Via Docker** If you are using Llama Stack Safety / Shield APIs, use:
```
docker run --network host -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./gpu/run.yaml:/root/llamastack-run-ollama.yaml --gpus=all llamastack/distribution-ollama --yaml_config /root/llamastack-run-ollama.yaml ```bash
llama stack run ./run-with-safety.yaml \
--port $LLAMA_STACK_PORT \
--env INFERENCE_MODEL=$INFERENCE_MODEL \
--env SAFETY_MODEL=$SAFETY_MODEL \
--env OLLAMA_URL=http://localhost:11434
``` ```
Make sure in your `run.yaml` file, your inference provider is pointing to the correct Ollama endpoint. E.g.
```yaml
inference:
- provider_id: ollama0
provider_type: remote::ollama
config:
url: http://127.0.0.1:14343
```
### (Optional) Update Model Serving Configuration ### (Optional) Update Model Serving Configuration
#### Downloading model via Ollama
You can use ollama for managing model downloads.
```bash
ollama pull llama3.1:8b-instruct-fp16
ollama pull llama3.1:70b-instruct-fp16
```
> [!NOTE] > [!NOTE]
> Please check the [OLLAMA_SUPPORTED_MODELS](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/inference/ollama/ollama.py) for the supported Ollama models. > Please check the [OLLAMA_SUPPORTED_MODELS](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/inference/ollama/ollama.py) for the supported Ollama models.

View file

@ -0,0 +1,144 @@
# Remote vLLM Distribution
The `llamastack/distribution-remote-vllm` distribution consists of the following provider configurations:
| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| inference | `remote::vllm` |
| memory | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
| safety | `inline::llama-guard` |
| telemetry | `inline::meta-reference` |
You can use this distribution if you have GPUs and want to run an independent vLLM server container for running inference.
### Environment Variables
The following environment variables can be configured:
- `LLAMASTACK_PORT`: Port for the Llama Stack distribution server (default: `5001`)
- `INFERENCE_MODEL`: Inference model loaded into the vLLM server (default: `meta-llama/Llama-3.2-3B-Instruct`)
- `VLLM_URL`: URL of the vLLM server with the main inference model (default: `http://host.docker.internal:5100/v1`)
- `MAX_TOKENS`: Maximum number of tokens for generation (default: `4096`)
- `SAFETY_VLLM_URL`: URL of the vLLM server with the safety model (default: `http://host.docker.internal:5101/v1`)
- `SAFETY_MODEL`: Name of the safety (Llama-Guard) model to use (default: `meta-llama/Llama-Guard-3-1B`)
## Setting up vLLM server
Please check the [vLLM Documentation](https://docs.vllm.ai/en/v0.5.5/serving/deploying_with_docker.html) to get a vLLM endpoint. Here is a sample script to start a vLLM server locally via Docker:
```bash
export INFERENCE_PORT=8000
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
export CUDA_VISIBLE_DEVICES=0
docker run \
--runtime nvidia \
--gpus $CUDA_VISIBLE_DEVICES \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
-p $INFERENCE_PORT:$INFERENCE_PORT \
--ipc=host \
vllm/vllm-openai:latest \
--gpu-memory-utilization 0.7 \
--model $INFERENCE_MODEL \
--port $INFERENCE_PORT
```
If you are using Llama Stack Safety / Shield APIs, then you will need to also run another instance of a vLLM with a corresponding safety model like `meta-llama/Llama-Guard-3-1B` using a script like:
```bash
export SAFETY_PORT=8081
export SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
export CUDA_VISIBLE_DEVICES=1
docker run \
--runtime nvidia \
--gpus $CUDA_VISIBLE_DEVICES \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
-p $SAFETY_PORT:$SAFETY_PORT \
--ipc=host \
vllm/vllm-openai:latest \
--gpu-memory-utilization 0.7 \
--model $SAFETY_MODEL \
--port $SAFETY_PORT
```
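Before wiring these into Llama Stack, you can verify that both vLLM servers respond on their OpenAI-compatible `/v1/models` endpoint (a sketch using the ports from the scripts above):

```python
import requests

for base in ("http://localhost:8000/v1", "http://localhost:8081/v1"):
    resp = requests.get(f"{base}/models", timeout=10)
    ids = [m["id"] for m in resp.json().get("data", [])]
    print(base, resp.status_code, ids)
```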
## Running Llama Stack
Now you are ready to run Llama Stack with vLLM as the inference provider. You can do this via Conda (build code) or Docker which has a pre-built image.
### Via Docker
This method allows you to get started quickly without having to build the distribution code.
```bash
export INFERENCE_PORT=8000
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
export LLAMA_STACK_PORT=5001
docker run \
-it \
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
-v ./run.yaml:/root/my-run.yaml \
llamastack/distribution-remote-vllm \
--yaml-config /root/my-run.yaml \
--port $LLAMA_STACK_PORT \
--env INFERENCE_MODEL=$INFERENCE_MODEL \
--env VLLM_URL=http://host.docker.internal:$INFERENCE_PORT/v1
```
If you are using Llama Stack Safety / Shield APIs, use:
```bash
export SAFETY_PORT=8081
export SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
docker run \
-it \
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
-v ./run-with-safety.yaml:/root/my-run.yaml \
llamastack/distribution-remote-vllm \
--yaml-config /root/my-run.yaml \
--port $LLAMA_STACK_PORT \
--env INFERENCE_MODEL=$INFERENCE_MODEL \
--env VLLM_URL=http://host.docker.internal:$INFERENCE_PORT/v1 \
--env SAFETY_MODEL=$SAFETY_MODEL \
--env SAFETY_VLLM_URL=http://host.docker.internal:$SAFETY_PORT/v1
```
### Via Conda
Make sure you have done `pip install llama-stack` and have the Llama Stack CLI available.
```bash
export INFERENCE_PORT=8000
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
export LLAMA_STACK_PORT=5001
cd distributions/remote-vllm
llama stack build --template remote-vllm --image-type conda
llama stack run ./run.yaml \
--port $LLAMA_STACK_PORT \
--env INFERENCE_MODEL=$INFERENCE_MODEL \
--env VLLM_URL=http://localhost:$INFERENCE_PORT/v1
```
If you are using Llama Stack Safety / Shield APIs, use:
```bash
export SAFETY_PORT=8081
export SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
llama stack run ./run-with-safety.yaml \
--port $LLAMA_STACK_PORT \
--env INFERENCE_MODEL=$INFERENCE_MODEL \
--env VLLM_URL=http://localhost:$INFERENCE_PORT/v1 \
--env SAFETY_MODEL=$SAFETY_MODEL \
--env SAFETY_VLLM_URL=http://localhost:$SAFETY_PORT/v1
```

View file

@ -1,83 +0,0 @@
# Remote vLLM Distribution
The `llamastack/distribution-remote-vllm` distribution consists of the following provider configurations.
| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|----------------- |---------------- |---------------- |------------------------------------ |---------------- |---------------- |
| **Provider(s)** | remote::vllm | meta-reference | remote::pgvector, remote::chromadb | meta-reference | meta-reference |
You can use this distribution if you have GPUs and want to run an independent vLLM server container for running inference.
## Using Docker Compose
You can use `docker compose` to start a vLLM container and Llama Stack server container together.
> [!NOTE]
> This assumes you have access to a GPU to start a vLLM server with GPU support.
```bash
$ cd distributions/remote-vllm; docker compose up
```
You will see outputs similar to following ---
```
<TO BE FILLED>
```
To kill the server
```bash
docker compose down
```
## Starting vLLM and Llama Stack separately
You may want to start a vLLM server and connect it with Llama Stack manually; there are two ways to do this.
#### Start vLLM server.
```bash
docker run --runtime nvidia --gpus all \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HUGGING_FACE_HUB_TOKEN=<secret>" \
-p 8000:8000 \
--ipc=host \
vllm/vllm-openai:latest \
--model meta-llama/Llama-3.1-8B-Instruct
```
Please check the [vLLM Documentation](https://docs.vllm.ai/en/v0.5.5/serving/deploying_with_docker.html) for more details.
#### Start Llama Stack server pointing to your vLLM server
We have provided a template `run.yaml` file in the `distributions/remote-vllm` directory. Please make sure to modify the `inference.provider_id` to point to your vLLM server endpoint. As an example, if your vLLM server is running on `http://127.0.0.1:8000`, your `run.yaml` file should look like the following:
```yaml
inference:
- provider_id: vllm0
provider_type: remote::vllm
config:
url: http://127.0.0.1:8000
```
**Via Conda**
If you are using Conda, you can build and run the Llama Stack server with the following commands:
```bash
cd distributions/remote-vllm
llama stack build --template remote_vllm --image-type conda
llama stack run run.yaml
```
**Via Docker**
You can use the Llama Stack Docker image to start the server with the following command:
```bash
docker run --network host -it -p 5000:5000 \
-v ~/.llama:/root/.llama \
-v ./gpu/run.yaml:/root/llamastack-run-remote-vllm.yaml \
--gpus=all \
llamastack/distribution-remote-vllm \
--yaml_config /root/llamastack-run-remote-vllm.yaml
```

View file

@ -2,94 +2,125 @@
The `llamastack/distribution-tgi` distribution consists of the following provider configurations. The `llamastack/distribution-tgi` distribution consists of the following provider configurations.
| API | Provider(s) |
| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** | |-----|-------------|
|----------------- |--------------- |---------------- |-------------------------------------------------- |---------------- |---------------- | | agents | `inline::meta-reference` |
| **Provider(s)** | remote::tgi | meta-reference | meta-reference, remote::pgvector, remote::chroma | meta-reference | meta-reference | | inference | `remote::tgi` |
| memory | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
| safety | `inline::llama-guard` |
| telemetry | `inline::meta-reference` |
### Docker: Start the Distribution (Single Node GPU) You can use this distribution if you have GPUs and want to run an independent TGI server container for running inference.
> [!NOTE] ### Environment Variables
> This assumes you have access to a GPU to start a TGI server with GPU support.
The following environment variables can be configured:
- `LLAMASTACK_PORT`: Port for the Llama Stack distribution server (default: `5001`)
- `INFERENCE_MODEL`: Inference model loaded into the TGI server (default: `meta-llama/Llama-3.2-3B-Instruct`)
- `TGI_URL`: URL of the TGI server with the main inference model (default: `http://127.0.0.1:8080/v1`)
- `TGI_SAFETY_URL`: URL of the TGI server with the safety model (default: `http://127.0.0.1:8081/v1`)
- `SAFETY_MODEL`: Name of the safety (Llama-Guard) model to use (default: `meta-llama/Llama-Guard-3-1B`)
``` ## Setting up TGI server
$ cd distributions/tgi && docker compose up
Please check the [TGI Getting Started Guide](https://github.com/huggingface/text-generation-inference?tab=readme-ov-file#get-started) to get a TGI endpoint. Here is a sample script to start a TGI server locally via Docker:
```bash
export INFERENCE_PORT=8080
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
export CUDA_VISIBLE_DEVICES=0
docker run --rm -it \
-v $HOME/.cache/huggingface:/data \
-p $INFERENCE_PORT:$INFERENCE_PORT \
--gpus $CUDA_VISIBLE_DEVICES \
ghcr.io/huggingface/text-generation-inference:2.3.1 \
--dtype bfloat16 \
--usage-stats off \
--sharded false \
--cuda-memory-fraction 0.7 \
--model-id $INFERENCE_MODEL \
--port $INFERENCE_PORT
``` ```
The script will first start up TGI server, then start up Llama Stack distribution server hooking up to the remote TGI provider for inference. You should be able to see the following outputs -- If you are using Llama Stack Safety / Shield APIs, then you will need to also run another instance of a TGI with a corresponding safety model like `meta-llama/Llama-Guard-3-1B` using a script like:
```
[text-generation-inference] | 2024-10-15T18:56:33.810397Z INFO text_generation_router::server: router/src/server.rs:1813: Using config Some(Llama) ```bash
[text-generation-inference] | 2024-10-15T18:56:33.810448Z WARN text_generation_router::server: router/src/server.rs:1960: Invalid hostname, defaulting to 0.0.0.0 export SAFETY_PORT=8081
[text-generation-inference] | 2024-10-15T18:56:33.864143Z INFO text_generation_router::server: router/src/server.rs:2353: Connected export SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
INFO: Started server process [1] export CUDA_VISIBLE_DEVICES=1
INFO: Waiting for application startup.
INFO: Application startup complete. docker run --rm -it \
INFO: Uvicorn running on http://[::]:5000 (Press CTRL+C to quit) -v $HOME/.cache/huggingface:/data \
-p $SAFETY_PORT:$SAFETY_PORT \
--gpus $CUDA_VISIBLE_DEVICES \
ghcr.io/huggingface/text-generation-inference:2.3.1 \
--dtype bfloat16 \
--usage-stats off \
--sharded false \
--model-id $SAFETY_MODEL \
--port $SAFETY_PORT
``` ```
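You can verify that both TGI servers are ready before starting Llama Stack; the sketch below uses TGI's `/health` and `/info` endpoints and the ports from the scripts above:

```python
import requests

for port in (8080, 8081):  # INFERENCE_PORT and SAFETY_PORT
    base = f"http://127.0.0.1:{port}"
    health = requests.get(f"{base}/health", timeout=10)
    info = requests.get(f"{base}/info", timeout=10).json()
    print(base, health.status_code, info.get("model_id"))
```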
To kill the server ## Running Llama Stack
```
docker compose down Now you are ready to run Llama Stack with TGI as the inference provider. You can do this via Conda (build code) or Docker which has a pre-built image.
### Via Docker
This method allows you to get started quickly without having to build the distribution code.
```bash
LLAMA_STACK_PORT=5001
docker run \
-it \
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
-v ./run.yaml:/root/my-run.yaml \
llamastack/distribution-tgi \
--yaml-config /root/my-run.yaml \
--port $LLAMA_STACK_PORT \
--env INFERENCE_MODEL=$INFERENCE_MODEL \
--env TGI_URL=http://host.docker.internal:$INFERENCE_PORT
``` ```
If you are using Llama Stack Safety / Shield APIs, use:
### Conda: TGI server + llama stack run ```bash
docker run \
If you wish to separately spin up a TGI server, and connect with Llama Stack, you may use the following commands. -it \
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
#### Start TGI server locally -v ./run-with-safety.yaml:/root/my-run.yaml \
- Please check the [TGI Getting Started Guide](https://github.com/huggingface/text-generation-inference?tab=readme-ov-file#get-started) to get a TGI endpoint. llamastack/distribution-tgi \
--yaml-config /root/my-run.yaml \
``` --port $LLAMA_STACK_PORT \
docker run --rm -it -v $HOME/.cache/huggingface:/data -p 5009:5009 --gpus all ghcr.io/huggingface/text-generation-inference:latest --dtype bfloat16 --usage-stats on --sharded false --model-id meta-llama/Llama-3.1-8B-Instruct --port 5009 --env INFERENCE_MODEL=$INFERENCE_MODEL \
--env TGI_URL=http://host.docker.internal:$INFERENCE_PORT \
--env SAFETY_MODEL=$SAFETY_MODEL \
--env TGI_SAFETY_URL=http://host.docker.internal:$SAFETY_PORT
``` ```
#### Start Llama Stack server pointing to TGI server ### Via Conda
**Via Conda** Make sure you have done `pip install llama-stack` and have the Llama Stack CLI available.
```bash ```bash
llama stack build --template tgi --image-type conda llama stack build --template tgi --image-type conda
# -- start a TGI server endpoint llama stack run ./run.yaml
llama stack run ./gpu/run.yaml --port 5001
--env INFERENCE_MODEL=$INFERENCE_MODEL
--env TGI_URL=http://127.0.0.1:$INFERENCE_PORT
``` ```
**Via Docker** If you are using Llama Stack Safety / Shield APIs, use:
```
docker run --network host -it -p 5000:5000 -v ./run.yaml:/root/my-run.yaml --gpus=all llamastack/distribution-tgi --yaml_config /root/my-run.yaml
```
Make sure that in your `run.yaml` file, your inference provider is pointing to the correct TGI server endpoint. E.g. ```bash
``` llama stack run ./run-with-safety.yaml
inference: --port 5001
- provider_id: tgi0 --env INFERENCE_MODEL=$INFERENCE_MODEL
provider_type: remote::tgi --env TGI_URL=http://127.0.0.1:$INFERENCE_PORT
config: --env SAFETY_MODEL=$SAFETY_MODEL
url: http://127.0.0.1:5009 --env TGI_SAFETY_URL=http://127.0.0.1:$SAFETY_PORT
```
### (Optional) Update Model Serving Configuration
To serve a new model with `tgi`, change the docker command flag `--model-id <model-to-serve>`.
This can be done by editing the `command` args in `compose.yaml`, e.g. replace "Llama-3.2-1B-Instruct" with the model you want to serve.
```
command: ["--dtype", "bfloat16", "--usage-stats", "on", "--sharded", "false", "--model-id", "meta-llama/Llama-3.2-1B-Instruct", "--port", "5009", "--cuda-memory-fraction", "0.3"]
```
or by changing the docker run command's `--model-id` flag
```
docker run --rm -it -v $HOME/.cache/huggingface:/data -p 5009:5009 --gpus all ghcr.io/huggingface/text-generation-inference:latest --dtype bfloat16 --usage-stats on --sharded false --model-id meta-llama/Llama-3.2-1B-Instruct --port 5009
```
In `run.yaml`, make sure you point the correct server endpoint to the TGI server endpoint serving your model.
```
inference:
- provider_id: tgi0
provider_type: remote::tgi
config:
url: http://127.0.0.1:5009
``` ```

View file

@ -1,62 +1,67 @@
# Together Distribution # Fireworks Distribution
### Connect to a Llama Stack Together Endpoint
- You may connect to a hosted endpoint `https://llama-stack.together.ai`, serving a Llama Stack distribution
The `llamastack/distribution-together` distribution consists of the following provider configurations. The `llamastack/distribution-together` distribution consists of the following provider configurations.
| API | Provider(s) |
| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** | |-----|-------------|
|----------------- |--------------- |---------------- |-------------------------------------------------- |---------------- |---------------- | | agents | `inline::meta-reference` |
| **Provider(s)** | remote::together | meta-reference | meta-reference, remote::weaviate | meta-reference | meta-reference | | inference | `remote::together` |
| memory | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
| safety | `inline::llama-guard` |
| telemetry | `inline::meta-reference` |
### Docker: Start the Distribution (Single Node CPU) ### Environment Variables
> [!NOTE] The following environment variables can be configured:
> This assumes you have a hosted endpoint at Together with an API Key.
``` - `LLAMASTACK_PORT`: Port for the Llama Stack distribution server (default: `5001`)
$ cd distributions/together && docker compose up - `TOGETHER_API_KEY`: Together.AI API Key (default: ``)
### Models
The following models are available by default:
- `meta-llama/Llama-3.1-8B-Instruct`
- `meta-llama/Llama-3.1-70B-Instruct`
- `meta-llama/Llama-3.1-405B-Instruct-FP8`
- `meta-llama/Llama-3.2-3B-Instruct`
- `meta-llama/Llama-3.2-11B-Vision-Instruct`
- `meta-llama/Llama-3.2-90B-Vision-Instruct`
- `meta-llama/Llama-Guard-3-8B`
- `meta-llama/Llama-Guard-3-11B-Vision`
### Prerequisite: API Keys
Make sure you have access to a Together API Key. You can get one by visiting [together.xyz](https://together.xyz/).
## Running Llama Stack with Together
You can do this via Conda (build code) or Docker which has a pre-built image.
### Via Docker
This method allows you to get started quickly without having to build the distribution code.
```bash
LLAMA_STACK_PORT=5001
docker run \
-it \
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
-v ./run.yaml:/root/my-run.yaml \
llamastack/distribution-together \
--yaml-config /root/my-run.yaml \
--port $LLAMA_STACK_PORT \
--env TOGETHER_API_KEY=$TOGETHER_API_KEY
``` ```
Make sure in your `run.yaml` file, your inference provider is pointing to the correct Together URL server endpoint. E.g. ### Via Conda
```
inference:
- provider_id: together
provider_type: remote::together
config:
url: https://api.together.xyz/v1
api_key: <optional api key>
```
### Conda llama stack run (Single Node CPU)
```bash ```bash
llama stack build --template together --image-type conda llama stack build --template together --image-type conda
# -- modify run.yaml to a valid Together server endpoint llama stack run ./run.yaml \
llama stack run ./run.yaml --port 5001 \
``` --env TOGETHER_API_KEY=$TOGETHER_API_KEY
### (Optional) Update Model Serving Configuration
Use `llama-stack-client models list` to check the available models served by together.
```
$ llama-stack-client models list
+------------------------------+------------------------------+---------------+------------+
| identifier | llama_model | provider_id | metadata |
+==============================+==============================+===============+============+
| Llama3.1-8B-Instruct | Llama3.1-8B-Instruct | together0 | {} |
+------------------------------+------------------------------+---------------+------------+
| Llama3.1-70B-Instruct | Llama3.1-70B-Instruct | together0 | {} |
+------------------------------+------------------------------+---------------+------------+
| Llama3.1-405B-Instruct | Llama3.1-405B-Instruct | together0 | {} |
+------------------------------+------------------------------+---------------+------------+
| Llama3.2-3B-Instruct | Llama3.2-3B-Instruct | together0 | {} |
+------------------------------+------------------------------+---------------+------------+
| Llama3.2-11B-Vision-Instruct | Llama3.2-11B-Vision-Instruct | together0 | {} |
+------------------------------+------------------------------+---------------+------------+
| Llama3.2-90B-Vision-Instruct | Llama3.2-90B-Vision-Instruct | together0 | {} |
+------------------------------+------------------------------+---------------+------------+
``` ```

View file

@ -74,7 +74,7 @@ A Distribution is where APIs and Providers are assembled together to provide a c
| Python | [llama-stack-client-python](https://github.com/meta-llama/llama-stack-client-python) | [![PyPI version](https://img.shields.io/pypi/v/llama_stack_client.svg)](https://pypi.org/project/llama_stack_client/) | Python | [llama-stack-client-python](https://github.com/meta-llama/llama-stack-client-python) | [![PyPI version](https://img.shields.io/pypi/v/llama_stack_client.svg)](https://pypi.org/project/llama_stack_client/)
| Swift | [llama-stack-client-swift](https://github.com/meta-llama/llama-stack-client-swift) | [![Swift Package Index](https://img.shields.io/endpoint?url=https%3A%2F%2Fswiftpackageindex.com%2Fapi%2Fpackages%2Fmeta-llama%2Fllama-stack-client-swift%2Fbadge%3Ftype%3Dswift-versions)](https://swiftpackageindex.com/meta-llama/llama-stack-client-swift) | Swift | [llama-stack-client-swift](https://github.com/meta-llama/llama-stack-client-swift) | [![Swift Package Index](https://img.shields.io/endpoint?url=https%3A%2F%2Fswiftpackageindex.com%2Fapi%2Fpackages%2Fmeta-llama%2Fllama-stack-client-swift%2Fbadge%3Ftype%3Dswift-versions)](https://swiftpackageindex.com/meta-llama/llama-stack-client-swift)
| Node | [llama-stack-client-node](https://github.com/meta-llama/llama-stack-client-node) | [![NPM version](https://img.shields.io/npm/v/llama-stack-client.svg)](https://npmjs.org/package/llama-stack-client) | Node | [llama-stack-client-node](https://github.com/meta-llama/llama-stack-client-node) | [![NPM version](https://img.shields.io/npm/v/llama-stack-client.svg)](https://npmjs.org/package/llama-stack-client)
| Kotlin | [llama-stack-client-kotlin](https://github.com/meta-llama/llama-stack-client-kotlin) | | Kotlin | [llama-stack-client-kotlin](https://github.com/meta-llama/llama-stack-client-kotlin) | [![Maven version](https://img.shields.io/maven-central/v/com.llama.llamastack/llama-stack-client-kotlin)](https://central.sonatype.com/artifact/com.llama.llamastack/llama-stack-client-kotlin)
Check out our client SDKs for connecting to Llama Stack server in your preferred language, you can choose from [python](https://github.com/meta-llama/llama-stack-client-python), [node](https://github.com/meta-llama/llama-stack-client-node), [swift](https://github.com/meta-llama/llama-stack-client-swift), and [kotlin](https://github.com/meta-llama/llama-stack-client-kotlin) programming languages to quickly build your applications. Check out our client SDKs for connecting to Llama Stack server in your preferred language, you can choose from [python](https://github.com/meta-llama/llama-stack-client-python), [node](https://github.com/meta-llama/llama-stack-client-node), [swift](https://github.com/meta-llama/llama-stack-client-swift), and [kotlin](https://github.com/meta-llama/llama-stack-client-kotlin) programming languages to quickly build your applications.

View file

@ -49,7 +49,7 @@ class BatchChatCompletionResponse(BaseModel):
@runtime_checkable @runtime_checkable
class BatchInference(Protocol): class BatchInference(Protocol):
@webmethod(route="/batch_inference/completion") @webmethod(route="/batch-inference/completion")
async def batch_completion( async def batch_completion(
self, self,
model: str, model: str,
@ -58,7 +58,7 @@ class BatchInference(Protocol):
logprobs: Optional[LogProbConfig] = None, logprobs: Optional[LogProbConfig] = None,
) -> BatchCompletionResponse: ... ) -> BatchCompletionResponse: ...
@webmethod(route="/batch_inference/chat_completion") @webmethod(route="/batch-inference/chat-completion")
async def batch_chat_completion( async def batch_chat_completion(
self, self,
model: str, model: str,

View file

@ -29,7 +29,7 @@ class DatasetIO(Protocol):
# keeping for aligning with inference/safety, but this is not used # keeping for aligning with inference/safety, but this is not used
dataset_store: DatasetStore dataset_store: DatasetStore
@webmethod(route="/datasetio/get_rows_paginated", method="GET") @webmethod(route="/datasetio/get-rows-paginated", method="GET")
async def get_rows_paginated( async def get_rows_paginated(
self, self,
dataset_id: str, dataset_id: str,

View file

@ -74,14 +74,14 @@ class EvaluateResponse(BaseModel):
class Eval(Protocol): class Eval(Protocol):
@webmethod(route="/eval/run_eval", method="POST") @webmethod(route="/eval/run-eval", method="POST")
async def run_eval( async def run_eval(
self, self,
task_id: str, task_id: str,
task_config: EvalTaskConfig, task_config: EvalTaskConfig,
) -> Job: ... ) -> Job: ...
@webmethod(route="/eval/evaluate_rows", method="POST") @webmethod(route="/eval/evaluate-rows", method="POST")
async def evaluate_rows( async def evaluate_rows(
self, self,
task_id: str, task_id: str,

View file

@ -42,13 +42,13 @@ class EvalTaskInput(CommonEvalTaskFields, BaseModel):
@runtime_checkable @runtime_checkable
class EvalTasks(Protocol): class EvalTasks(Protocol):
@webmethod(route="/eval_tasks/list", method="GET") @webmethod(route="/eval-tasks/list", method="GET")
async def list_eval_tasks(self) -> List[EvalTask]: ... async def list_eval_tasks(self) -> List[EvalTask]: ...
@webmethod(route="/eval_tasks/get", method="GET") @webmethod(route="/eval-tasks/get", method="GET")
async def get_eval_task(self, name: str) -> Optional[EvalTask]: ... async def get_eval_task(self, name: str) -> Optional[EvalTask]: ...
@webmethod(route="/eval_tasks/register", method="POST") @webmethod(route="/eval-tasks/register", method="POST")
async def register_eval_task( async def register_eval_task(
self, self,
eval_task_id: str, eval_task_id: str,

View file

@ -234,7 +234,7 @@ class Inference(Protocol):
logprobs: Optional[LogProbConfig] = None, logprobs: Optional[LogProbConfig] = None,
) -> Union[CompletionResponse, AsyncIterator[CompletionResponseStreamChunk]]: ... ) -> Union[CompletionResponse, AsyncIterator[CompletionResponseStreamChunk]]: ...
@webmethod(route="/inference/chat_completion") @webmethod(route="/inference/chat-completion")
async def chat_completion( async def chat_completion(
self, self,
model_id: str, model_id: str,
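For illustration, the renamed route can be exercised directly over HTTP. This is a sketch only: the path combines the `alpha` version prefix with the route above, and the request body (`model_id`, `messages`) is an assumption based on the method signature rather than the full schema.

```python
import requests

resp = requests.post(
    "http://localhost:5001/alpha/inference/chat-completion",
    json={
        "model_id": "meta-llama/Llama-3.2-3B-Instruct",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
print(resp.status_code)
print(resp.text[:500])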

View file

@ -130,13 +130,13 @@ class MemoryBankInput(BaseModel):
@runtime_checkable @runtime_checkable
class MemoryBanks(Protocol): class MemoryBanks(Protocol):
@webmethod(route="/memory_banks/list", method="GET") @webmethod(route="/memory-banks/list", method="GET")
async def list_memory_banks(self) -> List[MemoryBank]: ... async def list_memory_banks(self) -> List[MemoryBank]: ...
@webmethod(route="/memory_banks/get", method="GET") @webmethod(route="/memory-banks/get", method="GET")
async def get_memory_bank(self, memory_bank_id: str) -> Optional[MemoryBank]: ... async def get_memory_bank(self, memory_bank_id: str) -> Optional[MemoryBank]: ...
@webmethod(route="/memory_banks/register", method="POST") @webmethod(route="/memory-banks/register", method="POST")
async def register_memory_bank( async def register_memory_bank(
self, self,
memory_bank_id: str, memory_bank_id: str,
@ -145,5 +145,5 @@ class MemoryBanks(Protocol):
provider_memory_bank_id: Optional[str] = None, provider_memory_bank_id: Optional[str] = None,
) -> MemoryBank: ... ) -> MemoryBank: ...
@webmethod(route="/memory_banks/unregister", method="POST") @webmethod(route="/memory-banks/unregister", method="POST")
async def unregister_memory_bank(self, memory_bank_id: str) -> None: ... async def unregister_memory_bank(self, memory_bank_id: str) -> None: ...

View file

@ -31,6 +31,8 @@ class Model(CommonModelFields, Resource):
def provider_model_id(self) -> str: def provider_model_id(self) -> str:
return self.provider_resource_id return self.provider_resource_id
model_config = ConfigDict(protected_namespaces=())
class ModelInput(CommonModelFields): class ModelInput(CommonModelFields):
model_id: str model_id: str
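The added `model_config = ConfigDict(protected_namespaces=())` matters because pydantic v2 reserves the `model_` prefix; a field such as `model_id` would otherwise emit a namespace-conflict warning. A minimal sketch:

```python
from pydantic import BaseModel, ConfigDict


class ModelInputExample(BaseModel):
    # Clearing protected namespaces silences the warning for `model_`-prefixed fields.
    model_config = ConfigDict(protected_namespaces=())

    model_id: str


print(ModelInputExample(model_id="meta-llama/Llama-3.2-3B-Instruct"))
```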

View file

@ -176,7 +176,7 @@ class PostTrainingJobArtifactsResponse(BaseModel):
class PostTraining(Protocol): class PostTraining(Protocol):
@webmethod(route="/post_training/supervised_fine_tune") @webmethod(route="/post-training/supervised-fine-tune")
def supervised_fine_tune( def supervised_fine_tune(
self, self,
job_uuid: str, job_uuid: str,
@ -193,7 +193,7 @@ class PostTraining(Protocol):
logger_config: Dict[str, Any], logger_config: Dict[str, Any],
) -> PostTrainingJob: ... ) -> PostTrainingJob: ...
@webmethod(route="/post_training/preference_optimize") @webmethod(route="/post-training/preference-optimize")
def preference_optimize( def preference_optimize(
self, self,
job_uuid: str, job_uuid: str,
@ -208,22 +208,22 @@ class PostTraining(Protocol):
logger_config: Dict[str, Any], logger_config: Dict[str, Any],
) -> PostTrainingJob: ... ) -> PostTrainingJob: ...
@webmethod(route="/post_training/jobs") @webmethod(route="/post-training/jobs")
def get_training_jobs(self) -> List[PostTrainingJob]: ... def get_training_jobs(self) -> List[PostTrainingJob]: ...
# sends SSE stream of logs # sends SSE stream of logs
@webmethod(route="/post_training/job/logs") @webmethod(route="/post-training/job/logs")
def get_training_job_logstream(self, job_uuid: str) -> PostTrainingJobLogStream: ... def get_training_job_logstream(self, job_uuid: str) -> PostTrainingJobLogStream: ...
@webmethod(route="/post_training/job/status") @webmethod(route="/post-training/job/status")
def get_training_job_status( def get_training_job_status(
self, job_uuid: str self, job_uuid: str
) -> PostTrainingJobStatusResponse: ... ) -> PostTrainingJobStatusResponse: ...
@webmethod(route="/post_training/job/cancel") @webmethod(route="/post-training/job/cancel")
def cancel_training_job(self, job_uuid: str) -> None: ... def cancel_training_job(self, job_uuid: str) -> None: ...
@webmethod(route="/post_training/job/artifacts") @webmethod(route="/post-training/job/artifacts")
def get_training_job_artifacts( def get_training_job_artifacts(
self, job_uuid: str self, job_uuid: str
) -> PostTrainingJobArtifactsResponse: ... ) -> PostTrainingJobArtifactsResponse: ...

View file

@ -46,7 +46,7 @@ class ShieldStore(Protocol):
class Safety(Protocol): class Safety(Protocol):
shield_store: ShieldStore shield_store: ShieldStore
@webmethod(route="/safety/run_shield") @webmethod(route="/safety/run-shield")
async def run_shield( async def run_shield(
self, self,
shield_id: str, shield_id: str,
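A hedged sketch of calling the renamed safety route over HTTP; `shield_id` comes from the signature above, while `messages` and `params` are assumed field names:

```python
import requests

resp = requests.post(
    "http://localhost:5001/alpha/safety/run-shield",
    json={
        "shield_id": "meta-llama/Llama-Guard-3-1B",
        "messages": [{"role": "user", "content": "Hello!"}],
        "params": {},
    },
    timeout=60,
)
print(resp.status_code, resp.text[:300])
```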

View file

@ -44,7 +44,7 @@ class ScoringFunctionStore(Protocol):
class Scoring(Protocol): class Scoring(Protocol):
scoring_function_store: ScoringFunctionStore scoring_function_store: ScoringFunctionStore
@webmethod(route="/scoring/score_batch") @webmethod(route="/scoring/score-batch")
async def score_batch( async def score_batch(
self, self,
dataset_id: str, dataset_id: str,

View file

@ -104,13 +104,13 @@ class ScoringFnInput(CommonScoringFnFields, BaseModel):
@runtime_checkable @runtime_checkable
class ScoringFunctions(Protocol): class ScoringFunctions(Protocol):
@webmethod(route="/scoring_functions/list", method="GET") @webmethod(route="/scoring-functions/list", method="GET")
async def list_scoring_functions(self) -> List[ScoringFn]: ... async def list_scoring_functions(self) -> List[ScoringFn]: ...
@webmethod(route="/scoring_functions/get", method="GET") @webmethod(route="/scoring-functions/get", method="GET")
async def get_scoring_function(self, scoring_fn_id: str) -> Optional[ScoringFn]: ... async def get_scoring_function(self, scoring_fn_id: str) -> Optional[ScoringFn]: ...
@webmethod(route="/scoring_functions/register", method="POST") @webmethod(route="/scoring-functions/register", method="POST")
async def register_scoring_function( async def register_scoring_function(
self, self,
scoring_fn_id: str, scoring_fn_id: str,

View file

@ -44,7 +44,7 @@ class SyntheticDataGenerationResponse(BaseModel):
class SyntheticDataGeneration(Protocol): class SyntheticDataGeneration(Protocol):
@webmethod(route="/synthetic_data_generation/generate") @webmethod(route="/synthetic-data-generation/generate")
def synthetic_data_generate( def synthetic_data_generate(
self, self,
dialogs: List[Message], dialogs: List[Message],

View file

@ -125,8 +125,8 @@ Event = Annotated[
@runtime_checkable @runtime_checkable
class Telemetry(Protocol): class Telemetry(Protocol):
@webmethod(route="/telemetry/log_event") @webmethod(route="/telemetry/log-event")
async def log_event(self, event: Event) -> None: ... async def log_event(self, event: Event) -> None: ...
@webmethod(route="/telemetry/get_trace", method="GET") @webmethod(route="/telemetry/get-trace", method="GET")
async def get_trace(self, trace_id: str) -> Trace: ... async def get_trace(self, trace_id: str) -> Trace: ...
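A sketch of fetching a trace through the renamed route; the `trace_id` query parameter mirrors the method argument above:

```python
import requests

resp = requests.get(
    "http://localhost:5001/alpha/telemetry/get-trace",
    params={"trace_id": "00000000000000000000000000000000"},  # replace with a real trace id
    timeout=30,
)
print(resp.status_code, resp.text[:300])
```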

View file

@ -0,0 +1,7 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
LLAMA_STACK_API_VERSION = "alpha"
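This constant is what turns routes like `/inference/chat-completion` into the `/alpha/...` paths seen in the spec. A purely illustrative sketch of the composition (the real assembly happens in the server and codegen code):

```python
LLAMA_STACK_API_VERSION = "alpha"


def versioned(route: str) -> str:
    # Hypothetical helper: prefix a webmethod route with the API version.
    return f"/{LLAMA_STACK_API_VERSION}{route}"


print(versioned("/inference/chat-completion"))  # /alpha/inference/chat-completion
print(versioned("/safety/run-shield"))          # /alpha/safety/run-shield
```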

View file

@ -19,7 +19,7 @@ import httpx
from llama_models.datatypes import Model from llama_models.datatypes import Model
from llama_models.sku_list import LlamaDownloadInfo from llama_models.sku_list import LlamaDownloadInfo
from pydantic import BaseModel from pydantic import BaseModel, ConfigDict
from rich.console import Console from rich.console import Console
from rich.progress import ( from rich.progress import (
@ -293,8 +293,8 @@ class ParallelDownloader:
if free_space < required_space: if free_space < required_space:
self.console.print( self.console.print(
f"[red]Not enough disk space. Required: {required_space // (1024*1024)} MB, " f"[red]Not enough disk space. Required: {required_space // (1024 * 1024)} MB, "
f"Available: {free_space // (1024*1024)} MB[/red]" f"Available: {free_space // (1024 * 1024)} MB[/red]"
) )
return False return False
return True return True
@ -413,8 +413,7 @@ class ModelEntry(BaseModel):
model_id: str model_id: str
files: Dict[str, str] files: Dict[str, str]
class Config: model_config = ConfigDict(protected_namespaces=())
protected_namespaces = ()
class Manifest(BaseModel): class Manifest(BaseModel):
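
This hunk switches from Pydantic v1's nested `class Config` to Pydantic v2's `ConfigDict`. A minimal sketch of the same pattern, assuming Pydantic v2 is installed (the `ExampleEntry` model is illustrative, not part of the repo):

```python
from typing import Dict

from pydantic import BaseModel, ConfigDict


class ExampleEntry(BaseModel):
    # Pydantic v2 style: configure the model via `model_config` instead of a
    # nested `class Config`. Clearing protected namespaces silences the
    # warning for field names that start with "model_".
    model_config = ConfigDict(protected_namespaces=())

    model_id: str
    files: Dict[str, str]
```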

View file

@ -8,10 +8,14 @@ import argparse
from llama_stack.cli.subcommand import Subcommand from llama_stack.cli.subcommand import Subcommand
from llama_stack.distribution.datatypes import * # noqa: F403 from llama_stack.distribution.datatypes import * # noqa: F403
import importlib
import os import os
import shutil
from functools import lru_cache from functools import lru_cache
from pathlib import Path from pathlib import Path
import pkg_resources
from llama_stack.distribution.distribution import get_provider_registry from llama_stack.distribution.distribution import get_provider_registry
from llama_stack.distribution.utils.dynamic import instantiate_class_type from llama_stack.distribution.utils.dynamic import instantiate_class_type
@ -99,7 +103,9 @@ class StackBuild(Subcommand):
self.parser.error( self.parser.error(
f"Please specify a image-type (docker | conda) for {args.template}" f"Please specify a image-type (docker | conda) for {args.template}"
) )
self._run_stack_build_command_from_build_config(build_config) self._run_stack_build_command_from_build_config(
build_config, template_name=args.template
)
return return
self.parser.error( self.parser.error(
@ -193,7 +199,6 @@ class StackBuild(Subcommand):
apis = list(build_config.distribution_spec.providers.keys()) apis = list(build_config.distribution_spec.providers.keys())
run_config = StackRunConfig( run_config = StackRunConfig(
built_at=datetime.now(),
docker_image=( docker_image=(
build_config.name build_config.name
if build_config.image_type == ImageType.docker.value if build_config.image_type == ImageType.docker.value
@ -217,15 +222,23 @@ class StackBuild(Subcommand):
provider_types = [provider_types] provider_types = [provider_types]
for i, provider_type in enumerate(provider_types): for i, provider_type in enumerate(provider_types):
p_spec = Provider( pid = provider_type.split("::")[-1]
provider_id=f"{provider_type}-{i}",
provider_type=provider_type,
config={},
)
config_type = instantiate_class_type( config_type = instantiate_class_type(
provider_registry[Api(api)][provider_type].config_class provider_registry[Api(api)][provider_type].config_class
) )
p_spec.config = config_type() if hasattr(config_type, "sample_run_config"):
config = config_type.sample_run_config(
__distro_dir__=f"distributions/{build_config.name}"
)
else:
config = {}
p_spec = Provider(
provider_id=f"{pid}-{i}" if len(provider_types) > 1 else pid,
provider_type=provider_type,
config=config,
)
run_config.providers[api].append(p_spec) run_config.providers[api].append(p_spec)
os.makedirs(build_dir, exist_ok=True) os.makedirs(build_dir, exist_ok=True)
@ -241,12 +254,13 @@ class StackBuild(Subcommand):
) )
def _run_stack_build_command_from_build_config( def _run_stack_build_command_from_build_config(
self, build_config: BuildConfig self, build_config: BuildConfig, template_name: Optional[str] = None
) -> None: ) -> None:
import json import json
import os import os
import yaml import yaml
from termcolor import cprint
from llama_stack.distribution.build import build_image from llama_stack.distribution.build import build_image
from llama_stack.distribution.utils.config_dirs import DISTRIBS_BASE_DIR from llama_stack.distribution.utils.config_dirs import DISTRIBS_BASE_DIR
@ -264,7 +278,29 @@ class StackBuild(Subcommand):
if return_code != 0: if return_code != 0:
return return
self._generate_run_config(build_config, build_dir) if template_name:
# copy run.yaml from template to build_dir instead of generating it again
template_path = pkg_resources.resource_filename(
"llama_stack", f"templates/{template_name}/run.yaml"
)
os.makedirs(build_dir, exist_ok=True)
run_config_file = build_dir / f"{build_config.name}-run.yaml"
shutil.copy(template_path, run_config_file)
module_name = f"llama_stack.templates.{template_name}"
module = importlib.import_module(module_name)
distribution_template = module.get_distribution_template()
cprint("Build Successful! Next steps: ", color="green")
env_vars = ", ".join(distribution_template.run_config_env_vars.keys())
cprint(
f" 1. Set the environment variables: {env_vars}",
color="green",
)
cprint(
f" 2. `llama stack run {run_config_file}`",
color="green",
)
else:
self._generate_run_config(build_config, build_dir)
def _run_template_list_cmd(self, args: argparse.Namespace) -> None: def _run_template_list_cmd(self, args: argparse.Namespace) -> None:
import json import json

View file

@ -39,6 +39,13 @@ class StackRun(Subcommand):
help="Disable IPv6 support", help="Disable IPv6 support",
default=False, default=False,
) )
self.parser.add_argument(
"--env",
action="append",
help="Environment variables to pass to the server in KEY=VALUE format. Can be specified multiple times.",
default=[],
metavar="KEY=VALUE",
)
def _run_stack_run_cmd(self, args: argparse.Namespace) -> None: def _run_stack_run_cmd(self, args: argparse.Namespace) -> None:
from pathlib import Path from pathlib import Path
@ -108,4 +115,16 @@ class StackRun(Subcommand):
if args.disable_ipv6: if args.disable_ipv6:
run_args.append("--disable-ipv6") run_args.append("--disable-ipv6")
for env_var in args.env:
if "=" not in env_var:
self.parser.error(
f"Environment variable '{env_var}' must be in KEY=VALUE format"
)
return
key, value = env_var.split("=", 1) # split on first = only
if not key:
self.parser.error(f"Environment variable '{env_var}' has empty key")
return
run_args.extend(["--env", f"{key}={value}"])
run_with_pty(run_args) run_with_pty(run_args)
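
Note that each `--env` value is split on the first `=` only, so values that themselves contain `=` survive intact. A standalone sketch of that parsing choice (the helper name is illustrative):

```python
from typing import Tuple


def parse_env_flag(env_var: str) -> Tuple[str, str]:
    # Split on the first "=" only so a value like "a=b=c" keeps its tail.
    if "=" not in env_var:
        raise ValueError(f"'{env_var}' must be in KEY=VALUE format")
    key, value = env_var.split("=", 1)
    if not key:
        raise ValueError(f"'{env_var}' has an empty key")
    return key, value


assert parse_env_flag("TOGETHER_API_KEY=abc=123") == ("TOGETHER_API_KEY", "abc=123")
```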

View file

@ -146,6 +146,8 @@ fi
# Set version tag based on PyPI version # Set version tag based on PyPI version
if [ -n "$TEST_PYPI_VERSION" ]; then if [ -n "$TEST_PYPI_VERSION" ]; then
version_tag="test-$TEST_PYPI_VERSION" version_tag="test-$TEST_PYPI_VERSION"
elif [[ -n "$LLAMA_STACK_DIR" || -n "$LLAMA_MODELS_DIR" ]]; then
version_tag="dev"
else else
URL="https://pypi.org/pypi/llama-stack/json" URL="https://pypi.org/pypi/llama-stack/json"
version_tag=$(curl -s $URL | jq -r '.info.version') version_tag=$(curl -s $URL | jq -r '.info.version')

View file

@ -186,6 +186,5 @@ def parse_and_maybe_upgrade_config(config_dict: Dict[str, Any]) -> StackRunConfi
config_dict = upgrade_from_routing_table(config_dict) config_dict = upgrade_from_routing_table(config_dict)
config_dict["version"] = LLAMA_STACK_RUN_CONFIG_VERSION config_dict["version"] = LLAMA_STACK_RUN_CONFIG_VERSION
config_dict["built_at"] = datetime.now().isoformat()
return StackRunConfig(**config_dict) return StackRunConfig(**config_dict)

View file

@ -4,8 +4,6 @@
# This source code is licensed under the terms described in the LICENSE file in # This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree. # the root directory of this source tree.
from datetime import datetime
from typing import Dict, List, Optional, Union from typing import Dict, List, Optional, Union
from pydantic import BaseModel, Field from pydantic import BaseModel, Field
@ -115,7 +113,6 @@ class Provider(BaseModel):
class StackRunConfig(BaseModel): class StackRunConfig(BaseModel):
version: str = LLAMA_STACK_RUN_CONFIG_VERSION version: str = LLAMA_STACK_RUN_CONFIG_VERSION
built_at: datetime
image_name: str = Field( image_name: str = Field(
..., ...,

View file

@ -4,12 +4,12 @@
# This source code is licensed under the terms described in the LICENSE file in # This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree. # the root directory of this source tree.
import argparse
import asyncio import asyncio
import functools import functools
import inspect import inspect
import json import json
import os import os
import re
import signal import signal
import sys import sys
import traceback import traceback
@ -19,7 +19,6 @@ from contextlib import asynccontextmanager
from ssl import SSLError from ssl import SSLError
from typing import Any, Dict, Optional from typing import Any, Dict, Optional
import fire
import httpx import httpx
import yaml import yaml
@ -41,7 +40,11 @@ from llama_stack.providers.utils.telemetry.tracing import (
from llama_stack.distribution.datatypes import * # noqa: F403 from llama_stack.distribution.datatypes import * # noqa: F403
from llama_stack.distribution.request_headers import set_request_provider_data from llama_stack.distribution.request_headers import set_request_provider_data
from llama_stack.distribution.resolver import InvalidProviderError from llama_stack.distribution.resolver import InvalidProviderError
from llama_stack.distribution.stack import construct_stack from llama_stack.distribution.stack import (
construct_stack,
replace_env_vars,
validate_env_pair,
)
from .endpoints import get_all_api_endpoints from .endpoints import get_all_api_endpoints
@ -271,64 +274,36 @@ def create_dynamic_typed_route(func: Any, method: str):
return endpoint return endpoint
class EnvVarError(Exception): def main():
def __init__(self, var_name: str, path: str = ""): """Start the LlamaStack server."""
self.var_name = var_name parser = argparse.ArgumentParser(description="Start the LlamaStack server.")
self.path = path parser.add_argument(
super().__init__( "--yaml-config",
f"Environment variable '{var_name}' not set or empty{f' at {path}' if path else ''}" default="llamastack-run.yaml",
) help="Path to YAML configuration file",
)
parser.add_argument("--port", type=int, default=5000, help="Port to listen on")
parser.add_argument(
"--disable-ipv6", action="store_true", help="Whether to disable IPv6 support"
)
parser.add_argument(
"--env",
action="append",
help="Environment variables in KEY=value format. Can be specified multiple times.",
)
args = parser.parse_args()
def replace_env_vars(config: Any, path: str = "") -> Any: if args.env:
if isinstance(config, dict): for env_pair in args.env:
result = {}
for k, v in config.items():
try: try:
result[k] = replace_env_vars(v, f"{path}.{k}" if path else k) key, value = validate_env_pair(env_pair)
except EnvVarError as e: print(f"Setting CLI environment variable {key} => {value}")
raise EnvVarError(e.var_name, e.path) from None os.environ[key] = value
return result except ValueError as e:
print(f"Error: {str(e)}")
sys.exit(1)
elif isinstance(config, list): with open(args.yaml_config, "r") as fp:
result = []
for i, v in enumerate(config):
try:
result.append(replace_env_vars(v, f"{path}[{i}]"))
except EnvVarError as e:
raise EnvVarError(e.var_name, e.path) from None
return result
elif isinstance(config, str):
pattern = r"\${env\.([A-Z0-9_]+)(?::([^}]*))?}"
def get_env_var(match):
env_var = match.group(1)
default_val = match.group(2)
value = os.environ.get(env_var)
if not value:
if default_val is None:
raise EnvVarError(env_var, path)
else:
value = default_val
return value
try:
return re.sub(pattern, get_env_var, config)
except EnvVarError as e:
raise EnvVarError(e.var_name, e.path) from None
return config
def main(
yaml_config: str = "llamastack-run.yaml",
port: int = 5000,
disable_ipv6: bool = False,
):
with open(yaml_config, "r") as fp:
config = replace_env_vars(yaml.safe_load(fp)) config = replace_env_vars(yaml.safe_load(fp))
config = StackRunConfig(**config) config = StackRunConfig(**config)
@ -395,10 +370,10 @@ def main(
# FYI this does not do hot-reloads # FYI this does not do hot-reloads
listen_host = ["::", "0.0.0.0"] if not disable_ipv6 else "0.0.0.0" listen_host = ["::", "0.0.0.0"] if not args.disable_ipv6 else "0.0.0.0"
print(f"Listening on {listen_host}:{port}") print(f"Listening on {listen_host}:{args.port}")
uvicorn.run(app, host=listen_host, port=port) uvicorn.run(app, host=listen_host, port=args.port)
if __name__ == "__main__": if __name__ == "__main__":
fire.Fire(main) main()

View file

@ -4,8 +4,13 @@
# This source code is licensed under the terms described in the LICENSE file in # This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree. # the root directory of this source tree.
import os
from pathlib import Path
from typing import Any, Dict from typing import Any, Dict
import pkg_resources
import yaml
from termcolor import colored from termcolor import colored
from llama_models.llama3.api.datatypes import * # noqa: F403 from llama_models.llama3.api.datatypes import * # noqa: F403
@ -35,6 +40,9 @@ from llama_stack.distribution.store.registry import create_dist_registry
from llama_stack.providers.datatypes import Api from llama_stack.providers.datatypes import Api
LLAMA_STACK_API_VERSION = "alpha"
class LlamaStack( class LlamaStack(
MemoryBanks, MemoryBanks,
Inference, Inference,
@ -92,6 +100,77 @@ async def register_resources(run_config: StackRunConfig, impls: Dict[Api, Any]):
print("") print("")
class EnvVarError(Exception):
def __init__(self, var_name: str, path: str = ""):
self.var_name = var_name
self.path = path
super().__init__(
f"Environment variable '{var_name}' not set or empty{f' at {path}' if path else ''}"
)
def replace_env_vars(config: Any, path: str = "") -> Any:
if isinstance(config, dict):
result = {}
for k, v in config.items():
try:
result[k] = replace_env_vars(v, f"{path}.{k}" if path else k)
except EnvVarError as e:
raise EnvVarError(e.var_name, e.path) from None
return result
elif isinstance(config, list):
result = []
for i, v in enumerate(config):
try:
result.append(replace_env_vars(v, f"{path}[{i}]"))
except EnvVarError as e:
raise EnvVarError(e.var_name, e.path) from None
return result
elif isinstance(config, str):
pattern = r"\${env\.([A-Z0-9_]+)(?::([^}]*))?}"
def get_env_var(match):
env_var = match.group(1)
default_val = match.group(2)
value = os.environ.get(env_var)
if not value:
if default_val is None:
raise EnvVarError(env_var, path)
else:
value = default_val
# expand "~" from the values
return os.path.expanduser(value)
try:
return re.sub(pattern, get_env_var, config)
except EnvVarError as e:
raise EnvVarError(e.var_name, e.path) from None
return config
def validate_env_pair(env_pair: str) -> tuple[str, str]:
"""Validate and split an environment variable key-value pair."""
try:
key, value = env_pair.split("=", 1)
key = key.strip()
if not key:
raise ValueError(f"Empty key in environment variable pair: {env_pair}")
if not all(c.isalnum() or c == "_" for c in key):
raise ValueError(
f"Key must contain only alphanumeric characters and underscores: {key}"
)
return key, value
except ValueError as e:
raise ValueError(
f"Invalid environment variable format '{env_pair}': {str(e)}. Expected format: KEY=value"
) from e
# Produces a stack of providers for the given run config. Not all APIs may be # Produces a stack of providers for the given run config. Not all APIs may be
# asked for in the run config. # asked for in the run config.
async def construct_stack( async def construct_stack(
@ -105,3 +184,17 @@ async def construct_stack(
) )
await register_resources(run_config, impls) await register_resources(run_config, impls)
return impls return impls
def get_stack_run_config_from_template(template: str) -> StackRunConfig:
template_path = pkg_resources.resource_filename(
"llama_stack", f"templates/{template}/run.yaml"
)
if not Path(template_path).exists():
raise ValueError(f"Template '{template}' not found at {template_path}")
with open(template_path) as f:
run_config = yaml.safe_load(f)
return StackRunConfig(**replace_env_vars(run_config))
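
The substitution accepts both `${env.VAR}` and `${env.VAR:default}` forms. A minimal standalone sketch of how the regex above behaves (the `expand` helper is illustrative, not the repo function):

```python
import os
import re

_ENV_PATTERN = r"\${env\.([A-Z0-9_]+)(?::([^}]*))?}"


def expand(value: str) -> str:
    """Replace ${env.VAR} / ${env.VAR:default} occurrences in a string."""

    def _sub(match: re.Match) -> str:
        var, default = match.group(1), match.group(2)
        resolved = os.environ.get(var) or default
        if resolved is None:
            raise RuntimeError(f"Environment variable '{var}' is not set and has no default")
        # "~" in resolved values is expanded, matching the behavior above.
        return os.path.expanduser(resolved)

    return re.sub(_ENV_PATTERN, _sub, value)


os.environ.pop("OLLAMA_URL", None)
print(expand("${env.OLLAMA_URL:http://localhost:11434}"))  # -> http://localhost:11434
```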

View file

@ -33,10 +33,33 @@ shift
port="$1" port="$1"
shift shift
# Process environment variables from --env arguments
env_vars=""
while [[ $# -gt 0 ]]; do
case "$1" in
--env)
if [[ -n "$2" ]]; then
# collect environment variables so we can set them after activating the conda env
env_vars="$env_vars --env $2"
shift 2
else
echo -e "${RED}Error: --env requires a KEY=VALUE argument${NC}" >&2
exit 1
fi
;;
*)
shift
;;
esac
done
eval "$(conda shell.bash hook)" eval "$(conda shell.bash hook)"
conda deactivate && conda activate "$env_name" conda deactivate && conda activate "$env_name"
set -x
$CONDA_PREFIX/bin/python \ $CONDA_PREFIX/bin/python \
-m llama_stack.distribution.server.server \ -m llama_stack.distribution.server.server \
--yaml_config "$yaml_config" \ --yaml-config "$yaml_config" \
--port "$port" "$@" --port "$port" \
$env_vars

View file

@ -31,7 +31,7 @@ if [ $# -lt 3 ]; then
fi fi
build_name="$1" build_name="$1"
docker_image="distribution-$build_name" docker_image="localhost/distribution-$build_name"
shift shift
yaml_config="$1" yaml_config="$1"
@ -40,6 +40,26 @@ shift
port="$1" port="$1"
shift shift
# Process environment variables from --env arguments
env_vars=""
while [[ $# -gt 0 ]]; do
case "$1" in
--env)
echo "env = $2"
if [[ -n "$2" ]]; then
env_vars="$env_vars -e $2"
shift 2
else
echo -e "${RED}Error: --env requires a KEY=VALUE argument${NC}" >&2
exit 1
fi
;;
*)
shift
;;
esac
done
set -x set -x
if command -v selinuxenabled &> /dev/null && selinuxenabled; then if command -v selinuxenabled &> /dev/null && selinuxenabled; then
@ -59,15 +79,18 @@ fi
version_tag="latest" version_tag="latest"
if [ -n "$PYPI_VERSION" ]; then if [ -n "$PYPI_VERSION" ]; then
version_tag="$PYPI_VERSION" version_tag="$PYPI_VERSION"
elif [ -n "$LLAMA_STACK_DIR" ]; then
version_tag="dev"
elif [ -n "$TEST_PYPI_VERSION" ]; then elif [ -n "$TEST_PYPI_VERSION" ]; then
version_tag="test-$TEST_PYPI_VERSION" version_tag="test-$TEST_PYPI_VERSION"
fi fi
$DOCKER_BINARY run $DOCKER_OPTS -it \ $DOCKER_BINARY run $DOCKER_OPTS -it \
-p $port:$port \ -p $port:$port \
$env_vars \
-v "$yaml_config:/app/config.yaml" \ -v "$yaml_config:/app/config.yaml" \
$mounts \ $mounts \
$docker_image:$version_tag \ $docker_image:$version_tag \
python -m llama_stack.distribution.server.server \ python -m llama_stack.distribution.server.server \
--yaml_config /app/config.yaml \ --yaml-config /app/config.yaml \
--port $port "$@" --port "$port"

View file

@ -4,11 +4,22 @@
# This source code is licensed under the terms described in the LICENSE file in # This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree. # the root directory of this source tree.
from pydantic import BaseModel, Field from typing import Any, Dict
from pydantic import BaseModel
from llama_stack.providers.utils.kvstore import KVStoreConfig from llama_stack.providers.utils.kvstore import KVStoreConfig
from llama_stack.providers.utils.kvstore.config import SqliteKVStoreConfig from llama_stack.providers.utils.kvstore.config import SqliteKVStoreConfig
class MetaReferenceAgentsImplConfig(BaseModel): class MetaReferenceAgentsImplConfig(BaseModel):
persistence_store: KVStoreConfig = Field(default=SqliteKVStoreConfig()) persistence_store: KVStoreConfig
@classmethod
def sample_run_config(cls, __distro_dir__: str) -> Dict[str, Any]:
return {
"persistence_store": SqliteKVStoreConfig.sample_run_config(
__distro_dir__=__distro_dir__,
db_name="agents_store.db",
)
}

View file

@ -22,6 +22,7 @@ async def get_provider_impl(
deps[Api.datasets], deps[Api.datasets],
deps[Api.scoring], deps[Api.scoring],
deps[Api.inference], deps[Api.inference],
deps[Api.agents],
) )
await impl.initialize() await impl.initialize()
return impl return impl

View file

@ -9,6 +9,7 @@ from llama_models.llama3.api.datatypes import * # noqa: F403
from .....apis.common.job_types import Job from .....apis.common.job_types import Job
from .....apis.eval.eval import Eval, EvalTaskConfig, EvaluateResponse, JobStatus from .....apis.eval.eval import Eval, EvalTaskConfig, EvaluateResponse, JobStatus
from llama_stack.apis.common.type_system import * # noqa: F403 from llama_stack.apis.common.type_system import * # noqa: F403
from llama_stack.apis.agents import Agents
from llama_stack.apis.datasetio import DatasetIO from llama_stack.apis.datasetio import DatasetIO
from llama_stack.apis.datasets import Datasets from llama_stack.apis.datasets import Datasets
from llama_stack.apis.eval_tasks import EvalTask from llama_stack.apis.eval_tasks import EvalTask
@ -39,12 +40,14 @@ class MetaReferenceEvalImpl(Eval, EvalTasksProtocolPrivate):
datasets_api: Datasets, datasets_api: Datasets,
scoring_api: Scoring, scoring_api: Scoring,
inference_api: Inference, inference_api: Inference,
agents_api: Agents,
) -> None: ) -> None:
self.config = config self.config = config
self.datasetio_api = datasetio_api self.datasetio_api = datasetio_api
self.datasets_api = datasets_api self.datasets_api = datasets_api
self.scoring_api = scoring_api self.scoring_api = scoring_api
self.inference_api = inference_api self.inference_api = inference_api
self.agents_api = agents_api
# TODO: assume sync job, will need jobs API for async scheduling # TODO: assume sync job, will need jobs API for async scheduling
self.jobs = {} self.jobs = {}
@ -126,18 +129,50 @@ class MetaReferenceEvalImpl(Eval, EvalTasksProtocolPrivate):
self.jobs[job_id] = res self.jobs[job_id] = res
return Job(job_id=job_id) return Job(job_id=job_id)
async def evaluate_rows( async def _run_agent_generation(
self, self, input_rows: List[Dict[str, Any]], task_config: EvalTaskConfig
task_id: str, ) -> List[Dict[str, Any]]:
input_rows: List[Dict[str, Any]],
scoring_functions: List[str],
task_config: EvalTaskConfig,
) -> EvaluateResponse:
candidate = task_config.eval_candidate candidate = task_config.eval_candidate
if candidate.type == "agent": create_response = await self.agents_api.create_agent(candidate.config)
raise NotImplementedError( agent_id = create_response.agent_id
"Evaluation with generation has not been implemented for agents"
generations = []
for i, x in tqdm(enumerate(input_rows)):
assert ColumnName.chat_completion_input.value in x, "Invalid input row"
input_messages = eval(str(x[ColumnName.chat_completion_input.value]))
input_messages = [UserMessage(**x) for x in input_messages]
# NOTE: only single-turn agent generation is supported. Create a new session for each input row
session_create_response = await self.agents_api.create_agent_session(
agent_id, f"session-{i}"
) )
session_id = session_create_response.session_id
turn_request = dict(
agent_id=agent_id,
session_id=session_id,
messages=input_messages,
stream=True,
)
turn_response = [
chunk
async for chunk in await self.agents_api.create_agent_turn(
**turn_request
)
]
final_event = turn_response[-1].event.payload
generations.append(
{
ColumnName.generated_answer.value: final_event.turn.output_message.content
}
)
return generations
async def _run_model_generation(
self, input_rows: List[Dict[str, Any]], task_config: EvalTaskConfig
) -> List[Dict[str, Any]]:
candidate = task_config.eval_candidate
assert ( assert (
candidate.sampling_params.max_tokens is not None candidate.sampling_params.max_tokens is not None
), "SamplingParams.max_tokens must be provided" ), "SamplingParams.max_tokens must be provided"
@ -179,6 +214,23 @@ class MetaReferenceEvalImpl(Eval, EvalTasksProtocolPrivate):
else: else:
raise ValueError("Invalid input row") raise ValueError("Invalid input row")
return generations
async def evaluate_rows(
self,
task_id: str,
input_rows: List[Dict[str, Any]],
scoring_functions: List[str],
task_config: EvalTaskConfig,
) -> EvaluateResponse:
candidate = task_config.eval_candidate
if candidate.type == "agent":
generations = await self._run_agent_generation(input_rows, task_config)
elif candidate.type == "model":
generations = await self._run_model_generation(input_rows, task_config)
else:
raise ValueError(f"Invalid candidate type: {candidate.type}")
# scoring with generated_answer # scoring with generated_answer
score_input_rows = [ score_input_rows = [
input_r | generated_r input_r | generated_r

View file

@ -49,6 +49,18 @@ class MetaReferenceInferenceConfig(BaseModel):
resolved = resolve_model(self.model) resolved = resolve_model(self.model)
return resolved.pth_file_count return resolved.pth_file_count
@classmethod
def sample_run_config(
cls,
model: str = "Llama3.2-3B-Instruct",
checkpoint_dir: str = "${env.CHECKPOINT_DIR:null}",
) -> Dict[str, Any]:
return {
"model": model,
"max_seq_len": 4096,
"checkpoint_dir": checkpoint_dir,
}
class MetaReferenceQuantizedInferenceConfig(MetaReferenceInferenceConfig): class MetaReferenceQuantizedInferenceConfig(MetaReferenceInferenceConfig):
quantization: QuantizationConfig quantization: QuantizationConfig

View file

@ -107,7 +107,7 @@ class Llama:
sys.stdout = open(os.devnull, "w") sys.stdout = open(os.devnull, "w")
start_time = time.time() start_time = time.time()
if config.checkpoint_dir: if config.checkpoint_dir and config.checkpoint_dir != "null":
ckpt_dir = config.checkpoint_dir ckpt_dir = config.checkpoint_dir
else: else:
ckpt_dir = model_checkpoint_dir(model) ckpt_dir = model_checkpoint_dir(model)
@ -137,7 +137,6 @@ class Llama:
), f"model_args vocab = {model_args.vocab_size} but tokenizer vocab = {tokenizer.n_words}" ), f"model_args vocab = {model_args.vocab_size} but tokenizer vocab = {tokenizer.n_words}"
if isinstance(config, MetaReferenceQuantizedInferenceConfig): if isinstance(config, MetaReferenceQuantizedInferenceConfig):
if isinstance(config.quantization, Fp8QuantizationConfig): if isinstance(config.quantization, Fp8QuantizationConfig):
from .quantization.loader import convert_to_fp8_quantized_model from .quantization.loader import convert_to_fp8_quantized_model

View file

@ -34,6 +34,16 @@ class VLLMConfig(BaseModel):
default=0.3, default=0.3,
) )
@classmethod
def sample_run_config(cls):
return {
"model": "${env.VLLM_INFERENCE_MODEL:Llama3.2-3B-Instruct}",
"tensor_parallel_size": "${env.VLLM_TENSOR_PARALLEL_SIZE:1}",
"max_tokens": "${env.VLLM_MAX_TOKENS:4096}",
"enforce_eager": "${env.VLLM_ENFORCE_EAGER:False}",
"gpu_memory_utilization": "${env.VLLM_GPU_MEMORY_UTILIZATION:0.3}",
}
@field_validator("model") @field_validator("model")
@classmethod @classmethod
def validate_model(cls, model: str) -> str: def validate_model(cls, model: str) -> str:

View file

@ -4,10 +4,11 @@
# This source code is licensed under the terms described in the LICENSE file in # This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree. # the root directory of this source tree.
from typing import Any, Dict
from llama_models.schema_utils import json_schema_type from llama_models.schema_utils import json_schema_type
from pydantic import BaseModel from pydantic import BaseModel
from llama_stack.distribution.utils.config_dirs import RUNTIME_BASE_DIR
from llama_stack.providers.utils.kvstore.config import ( from llama_stack.providers.utils.kvstore.config import (
KVStoreConfig, KVStoreConfig,
SqliteKVStoreConfig, SqliteKVStoreConfig,
@ -16,6 +17,13 @@ from llama_stack.providers.utils.kvstore.config import (
@json_schema_type @json_schema_type
class FaissImplConfig(BaseModel): class FaissImplConfig(BaseModel):
kvstore: KVStoreConfig = SqliteKVStoreConfig( kvstore: KVStoreConfig
db_path=(RUNTIME_BASE_DIR / "faiss_store.db").as_posix()
) # Uses SQLite config specific to FAISS storage @classmethod
def sample_run_config(cls, __distro_dir__: str) -> Dict[str, Any]:
return {
"kvstore": SqliteKVStoreConfig.sample_run_config(
__distro_dir__=__distro_dir__,
db_name="faiss_store.db",
)
}

View file

@ -73,18 +73,21 @@ DEFAULT_LG_V3_SAFETY_CATEGORIES = [
CAT_ELECTIONS, CAT_ELECTIONS,
] ]
LLAMA_GUARD_MODEL_IDS = [ # accept both CoreModelId and huggingface repo id
CoreModelId.llama_guard_3_8b.value, LLAMA_GUARD_MODEL_IDS = {
CoreModelId.llama_guard_3_1b.value, CoreModelId.llama_guard_3_8b.value: "meta-llama/Llama-Guard-3-8B",
CoreModelId.llama_guard_3_11b_vision.value, "meta-llama/Llama-Guard-3-8B": "meta-llama/Llama-Guard-3-8B",
] CoreModelId.llama_guard_3_1b.value: "meta-llama/Llama-Guard-3-1B",
"meta-llama/Llama-Guard-3-1B": "meta-llama/Llama-Guard-3-1B",
CoreModelId.llama_guard_3_11b_vision.value: "meta-llama/Llama-Guard-3-11B-Vision",
"meta-llama/Llama-Guard-3-11B-Vision": "meta-llama/Llama-Guard-3-11B-Vision",
}
MODEL_TO_SAFETY_CATEGORIES_MAP = { MODEL_TO_SAFETY_CATEGORIES_MAP = {
CoreModelId.llama_guard_3_8b.value: ( "meta-llama/Llama-Guard-3-8B": DEFAULT_LG_V3_SAFETY_CATEGORIES
DEFAULT_LG_V3_SAFETY_CATEGORIES + [CAT_CODE_INTERPRETER_ABUSE] + [CAT_CODE_INTERPRETER_ABUSE],
), "meta-llama/Llama-Guard-3-1B": DEFAULT_LG_V3_SAFETY_CATEGORIES,
CoreModelId.llama_guard_3_1b.value: DEFAULT_LG_V3_SAFETY_CATEGORIES, "meta-llama/Llama-Guard-3-11B-Vision": DEFAULT_LG_V3_SAFETY_CATEGORIES,
CoreModelId.llama_guard_3_11b_vision.value: DEFAULT_LG_V3_SAFETY_CATEGORIES,
} }
@ -150,8 +153,9 @@ class LlamaGuardSafetyImpl(Safety, ShieldsProtocolPrivate):
if len(messages) > 0 and messages[0].role != Role.user.value: if len(messages) > 0 and messages[0].role != Role.user.value:
messages[0] = UserMessage(content=messages[0].content) messages[0] = UserMessage(content=messages[0].content)
model = LLAMA_GUARD_MODEL_IDS[shield.provider_resource_id]
impl = LlamaGuardShield( impl = LlamaGuardShield(
model=shield.provider_resource_id, model=model,
inference_api=self.inference_api, inference_api=self.inference_api,
excluded_categories=self.config.excluded_categories, excluded_categories=self.config.excluded_categories,
) )
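
Because the mapping now accepts both spellings, a shield registered with either the core model descriptor or the HuggingFace repo id resolves to the same Llama Guard model. A tiny illustration, assuming `CoreModelId.llama_guard_3_8b.value` is the bare descriptor `"Llama-Guard-3-8B"`:

```python
LLAMA_GUARD_MODEL_IDS = {
    "Llama-Guard-3-8B": "meta-llama/Llama-Guard-3-8B",             # CoreModelId value (assumed)
    "meta-llama/Llama-Guard-3-8B": "meta-llama/Llama-Guard-3-8B",  # HF repo id
}

# Either spelling of shield.provider_resource_id lands on the same HF repo.
assert (
    LLAMA_GUARD_MODEL_IDS["Llama-Guard-3-8B"]
    == LLAMA_GUARD_MODEL_IDS["meta-llama/Llama-Guard-3-8B"]
)
```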

View file

@ -0,0 +1,91 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
from llama_stack.apis.common.type_system import NumberType
from llama_stack.apis.scoring_functions import LLMAsJudgeScoringFnParams, ScoringFn
GRADER_TEMPLATE = """
Your job is to look at a question, a gold target, and a predicted answer, and then assign a grade of either ["CORRECT", "INCORRECT", "NOT_ATTEMPTED"].
First, I will give examples of each grade, and then you will grade a new example.
The following are examples of CORRECT predicted answers.
```
Question: What are the names of Barack Obama's children?
Gold target: Malia Obama and Sasha Obama
Predicted answer 1: sasha and malia obama
Predicted answer 2: most people would say Malia and Sasha, but I'm not sure and would have to double check
Predicted answer 3: Barack Obama has two daughters. Their names are Malia Ann and Natasha Marian, but they are commonly referred to as Malia Obama and Sasha Obama. Malia was born on July 4, 1998, and Sasha was born on June 10, 2001.
```
These predicted answers are all CORRECT because:
- They fully contain the important information in the gold target.
- They do not contain any information that contradicts the gold target.
- Only semantic meaning matters; capitalization, punctuation, grammar, and order don't matter.
- Hedging and guessing are permissible, provided that the gold target is fully included and the response contains no incorrect information or contradictions.
The following are examples of INCORRECT predicted answers.
```
Question: What are the names of Barack Obama's children?
Gold target: Malia and Sasha
Predicted answer 1: Malia.
Predicted answer 2: Malia, Sasha, and Susan.
Predicted answer 3: Barack Obama does not have any children.
Predicted answer 4: I think it's either Malia and Sasha. Or it could be Malia and Jackie. Or it could be Joey and Malia.
Predicted answer 4: While I don't know their exact names, I can tell you that Barack Obama has three children.
Predicted answer 5: It's possible you may mean Betsy and Olivia. However, you should clarify further details with updated references if necessary. Is that the correct answer?
Predicted answer 6: It may be the case that Obama's child is named James. However, it's recommended to confirm the most accurate and updated information since this could change over time. This model may not always reflect the most current information.
```
These predicted answers are all INCORRECT because:
- A factual statement in the answer contradicts the gold target. Incorrect statements that have some hedging (e.g., "it is possible that", "although i'm not sure, i think") are also considered incorrect.
The following are examples of NOT_ATTEMPTED predicted answers.
```
Question: What are the names of Barack Obama's children?
Gold target: Malia and Sasha
Predicted answer 1: I don't know.
Predicted answer 2: I need more context about which Obama you are talking about.
Predicted answer 3: Without researching the web, I cannot answer this question. However, I can tell you that Barack Obama has two children.
Predicted answer 4: Barack Obama has two children. I know that one of them is Malia, but I'm not sure about the other one.
```
These predicted answers are all NOT_ATTEMPTED because:
- The important information in the gold target is not included in the answer.
- No statements in the answer contradict the gold target.
Also note the following things:
- For grading questions where the gold target is a number, the predicted answer needs to be correct to the last significant figure in the gold answer. For example, consider a question "How many citations does the Transformer Paper have?" with gold target "120k".
- Predicted answers "120k", "124k", and 115k" are all CORRECT.
- Predicted answers "100k" and "113k" are INCORRECT.
- Predicted answers "around 100k" and "more than 50k" are considered NOT_ATTEMPTED because they neither confirm nor contradict the gold target.
- The gold target may contain more information than the question. In such cases, the predicted answer only needs to contain the information that is in the question.
- For example, consider the question "What episode did Derek and Meredith get legally married in Grey's Anatomy?" with gold target "Season 7, Episode 20: White Wedding". Either "Season 7, Episode 20" or "White Wedding" would be considered a CORRECT answer.
- Do not punish predicted answers if they omit information that would be clearly inferred from the question.
- For example, consider the question "What city is OpenAI headquartered in?" and the gold target "San Francisco, California". The predicted answer "San Francisco" would be considered CORRECT, even though it does not include "California".
- Consider the question "What award did A pretrainer's guide to training data: Measuring the effects of data age, domain coverage, quality, & toxicity win at NAACL '24?", the gold target is "Outstanding Paper Award". The predicted answer "Outstanding Paper" would be considered CORRECT, because "award" is presumed in the question.
- For the question "What is the height of Jason Wei in meters?", the gold target is "1.73 m". The predicted answer "1.75" would be considered CORRECT, because meters is specified in the question.
- For the question "What is the name of Barack Obama's wife?", the gold target is "Michelle Obama". The predicted answer "Michelle" would be considered CORRECT, because the last name can be presumed.
- Do not punish for typos in people's name if it's clearly the same name.
- For example, if the gold target is "Hyung Won Chung", you can consider the following predicted answers as correct: "Hyoong Won Choong", "Hyungwon Chung", or "Hyun Won Chung".
Here is a new example. Simply reply with either CORRECT, INCORRECT, NOT ATTEMPTED. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.
```
Question: {input_query}
Gold target: {expected_answer}
Predicted answer: {generated_answer}
```
Grade the predicted answer of this new question as one of:
A: CORRECT
B: INCORRECT
C: NOT_ATTEMPTED
Just return the letters "A", "B", or "C", with no text around it.
""".strip()
llm_as_judge_405b_simpleqa = ScoringFn(
identifier="llm-as-judge::405b-simpleqa",
description="Llm As Judge Scoring Function for SimpleQA Benchmark (https://github.com/openai/simple-evals/blob/main/simpleqa_eval.py)",
return_type=NumberType(),
provider_id="llm-as-judge",
provider_resource_id="llm-as-judge-405b-simpleqa",
params=LLMAsJudgeScoringFnParams(
judge_model="Llama3.1-405B-Instruct",
prompt_template=GRADER_TEMPLATE,
judge_score_regexes=[r"(A|B|C)"],
),
)
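
The judge's raw completion is reduced to a grade via `judge_score_regexes`. A minimal sketch of that extraction step (standalone, not the scoring-function implementation):

```python
import re
from typing import Optional

JUDGE_SCORE_REGEXES = [r"(A|B|C)"]


def extract_grade(judge_output: str) -> Optional[str]:
    """Return the first captured grade letter, if any."""
    for pattern in JUDGE_SCORE_REGEXES:
        match = re.search(pattern, judge_output)
        if match:
            return match.group(1)
    return None


print(extract_grade("A"))  # -> "A" (CORRECT per the template above)
print(extract_grade("x"))  # -> None (no grade found)
```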

View file

@ -9,7 +9,7 @@ from llama_stack.apis.scoring_functions import ScoringFn
llm_as_judge_base = ScoringFn( llm_as_judge_base = ScoringFn(
identifier="llm-as-judge::llm_as_judge_base", identifier="llm-as-judge::base",
description="Llm As Judge Scoring Function", description="Llm As Judge Scoring Function",
return_type=NumberType(), return_type=NumberType(),
provider_id="llm-as-judge", provider_id="llm-as-judge",

View file

@ -11,6 +11,8 @@ from llama_stack.apis.scoring import * # noqa: F401, F403
from llama_stack.apis.common.type_system import * # noqa: F403 from llama_stack.apis.common.type_system import * # noqa: F403
import re import re
from .fn_defs.llm_as_judge_405b_simpleqa import llm_as_judge_405b_simpleqa
from .fn_defs.llm_as_judge_base import llm_as_judge_base from .fn_defs.llm_as_judge_base import llm_as_judge_base
@ -24,6 +26,7 @@ class LlmAsJudgeScoringFn(BaseScoringFn):
self.inference_api = inference_api self.inference_api = inference_api
self.supported_fn_defs_registry = { self.supported_fn_defs_registry = {
llm_as_judge_base.identifier: llm_as_judge_base, llm_as_judge_base.identifier: llm_as_judge_base,
llm_as_judge_405b_simpleqa.identifier: llm_as_judge_405b_simpleqa,
} }
async def score_row( async def score_row(

View file

@ -22,6 +22,7 @@ def available_providers() -> List[ProviderSpec]:
Api.datasets, Api.datasets,
Api.scoring, Api.scoring,
Api.inference, Api.inference,
Api.agents,
], ],
), ),
] ]

View file

@ -4,7 +4,7 @@
# This source code is licensed under the terms described in the LICENSE file in # This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree. # the root directory of this source tree.
from typing import Optional from typing import Any, Dict, Optional
from llama_models.schema_utils import json_schema_type from llama_models.schema_utils import json_schema_type
from pydantic import BaseModel, Field from pydantic import BaseModel, Field
@ -20,3 +20,10 @@ class FireworksImplConfig(BaseModel):
default=None, default=None,
description="The Fireworks.ai API Key", description="The Fireworks.ai API Key",
) )
@classmethod
def sample_run_config(cls) -> Dict[str, Any]:
return {
"url": "https://api.fireworks.ai/inference",
"api_key": "${env.FIREWORKS_API_KEY}",
}

View file

@ -35,7 +35,7 @@ from llama_stack.providers.utils.inference.prompt_adapter import (
from .config import FireworksImplConfig from .config import FireworksImplConfig
model_aliases = [ MODEL_ALIASES = [
build_model_alias( build_model_alias(
"fireworks/llama-v3p1-8b-instruct", "fireworks/llama-v3p1-8b-instruct",
CoreModelId.llama3_1_8b_instruct.value, CoreModelId.llama3_1_8b_instruct.value,
@ -79,7 +79,7 @@ class FireworksInferenceAdapter(
ModelRegistryHelper, Inference, NeedsRequestProviderData ModelRegistryHelper, Inference, NeedsRequestProviderData
): ):
def __init__(self, config: FireworksImplConfig) -> None: def __init__(self, config: FireworksImplConfig) -> None:
ModelRegistryHelper.__init__(self, model_aliases) ModelRegistryHelper.__init__(self, MODEL_ALIASES)
self.config = config self.config = config
self.formatter = ChatFormat(Tokenizer.get_instance()) self.formatter = ChatFormat(Tokenizer.get_instance())

View file

@ -30,7 +30,7 @@ from llama_stack.apis.inference import (
ResponseFormat, ResponseFormat,
) )
from llama_stack.providers.utils.inference.model_registry import ( from llama_stack.providers.utils.inference.model_registry import (
build_model_alias, build_model_alias_with_just_provider_model_id,
ModelRegistryHelper, ModelRegistryHelper,
) )
@ -43,39 +43,39 @@ from ._openai_utils import (
from ._utils import check_health from ._utils import check_health
_MODEL_ALIASES = [ _MODEL_ALIASES = [
build_model_alias( build_model_alias_with_just_provider_model_id(
"meta/llama3-8b-instruct", "meta/llama3-8b-instruct",
CoreModelId.llama3_8b_instruct.value, CoreModelId.llama3_8b_instruct.value,
), ),
build_model_alias( build_model_alias_with_just_provider_model_id(
"meta/llama3-70b-instruct", "meta/llama3-70b-instruct",
CoreModelId.llama3_70b_instruct.value, CoreModelId.llama3_70b_instruct.value,
), ),
build_model_alias( build_model_alias_with_just_provider_model_id(
"meta/llama-3.1-8b-instruct", "meta/llama-3.1-8b-instruct",
CoreModelId.llama3_1_8b_instruct.value, CoreModelId.llama3_1_8b_instruct.value,
), ),
build_model_alias( build_model_alias_with_just_provider_model_id(
"meta/llama-3.1-70b-instruct", "meta/llama-3.1-70b-instruct",
CoreModelId.llama3_1_70b_instruct.value, CoreModelId.llama3_1_70b_instruct.value,
), ),
build_model_alias( build_model_alias_with_just_provider_model_id(
"meta/llama-3.1-405b-instruct", "meta/llama-3.1-405b-instruct",
CoreModelId.llama3_1_405b_instruct.value, CoreModelId.llama3_1_405b_instruct.value,
), ),
build_model_alias( build_model_alias_with_just_provider_model_id(
"meta/llama-3.2-1b-instruct", "meta/llama-3.2-1b-instruct",
CoreModelId.llama3_2_1b_instruct.value, CoreModelId.llama3_2_1b_instruct.value,
), ),
build_model_alias( build_model_alias_with_just_provider_model_id(
"meta/llama-3.2-3b-instruct", "meta/llama-3.2-3b-instruct",
CoreModelId.llama3_2_3b_instruct.value, CoreModelId.llama3_2_3b_instruct.value,
), ),
build_model_alias( build_model_alias_with_just_provider_model_id(
"meta/llama-3.2-11b-vision-instruct", "meta/llama-3.2-11b-vision-instruct",
CoreModelId.llama3_2_11b_vision_instruct.value, CoreModelId.llama3_2_11b_vision_instruct.value,
), ),
build_model_alias( build_model_alias_with_just_provider_model_id(
"meta/llama-3.2-90b-vision-instruct", "meta/llama-3.2-90b-vision-instruct",
CoreModelId.llama3_2_90b_vision_instruct.value, CoreModelId.llama3_2_90b_vision_instruct.value,
), ),

View file

@ -4,14 +4,10 @@
# This source code is licensed under the terms described in the LICENSE file in # This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree. # the root directory of this source tree.
from llama_stack.distribution.datatypes import RemoteProviderConfig from .config import OllamaImplConfig
class OllamaImplConfig(RemoteProviderConfig): async def get_adapter_impl(config: OllamaImplConfig, _deps):
port: int = 11434
async def get_adapter_impl(config: RemoteProviderConfig, _deps):
from .ollama import OllamaInferenceAdapter from .ollama import OllamaInferenceAdapter
impl = OllamaInferenceAdapter(config.url) impl = OllamaInferenceAdapter(config.url)

View file

@ -0,0 +1,22 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
from typing import Any, Dict
from pydantic import BaseModel
DEFAULT_OLLAMA_URL = "http://localhost:11434"
class OllamaImplConfig(BaseModel):
url: str = DEFAULT_OLLAMA_URL
@classmethod
def sample_run_config(
cls, url: str = "${env.OLLAMA_URL:http://localhost:11434}", **kwargs
) -> Dict[str, Any]:
return {"url": url}

View file

@ -16,6 +16,7 @@ from ollama import AsyncClient
from llama_stack.providers.utils.inference.model_registry import ( from llama_stack.providers.utils.inference.model_registry import (
build_model_alias, build_model_alias,
build_model_alias_with_just_provider_model_id,
ModelRegistryHelper, ModelRegistryHelper,
) )
@ -44,10 +45,18 @@ model_aliases = [
"llama3.1:8b-instruct-fp16", "llama3.1:8b-instruct-fp16",
CoreModelId.llama3_1_8b_instruct.value, CoreModelId.llama3_1_8b_instruct.value,
), ),
build_model_alias_with_just_provider_model_id(
"llama3.1:8b",
CoreModelId.llama3_1_8b_instruct.value,
),
build_model_alias( build_model_alias(
"llama3.1:70b-instruct-fp16", "llama3.1:70b-instruct-fp16",
CoreModelId.llama3_1_70b_instruct.value, CoreModelId.llama3_1_70b_instruct.value,
), ),
build_model_alias_with_just_provider_model_id(
"llama3.1:70b",
CoreModelId.llama3_1_70b_instruct.value,
),
build_model_alias( build_model_alias(
"llama3.2:1b-instruct-fp16", "llama3.2:1b-instruct-fp16",
CoreModelId.llama3_2_1b_instruct.value, CoreModelId.llama3_2_1b_instruct.value,
@ -56,6 +65,24 @@ model_aliases = [
"llama3.2:3b-instruct-fp16", "llama3.2:3b-instruct-fp16",
CoreModelId.llama3_2_3b_instruct.value, CoreModelId.llama3_2_3b_instruct.value,
), ),
build_model_alias_with_just_provider_model_id(
"llama3.2:1b",
CoreModelId.llama3_2_1b_instruct.value,
),
build_model_alias_with_just_provider_model_id(
"llama3.2:3b",
CoreModelId.llama3_2_3b_instruct.value,
),
build_model_alias(
"llama3.2-vision:11b-instruct-fp16",
CoreModelId.llama3_2_11b_vision_instruct.value,
),
build_model_alias_with_just_provider_model_id(
"llama3.2-vision",
CoreModelId.llama3_2_11b_vision_instruct.value,
),
# The Llama Guard models don't have their full fp16 versions
# so we are going to alias their default version to the canonical SKU
build_model_alias( build_model_alias(
"llama-guard3:8b", "llama-guard3:8b",
CoreModelId.llama_guard_3_8b.value, CoreModelId.llama_guard_3_8b.value,
@ -64,10 +91,6 @@ model_aliases = [
"llama-guard3:1b", "llama-guard3:1b",
CoreModelId.llama_guard_3_1b.value, CoreModelId.llama_guard_3_1b.value,
), ),
build_model_alias(
"x/llama3.2-vision:11b-instruct-fp16",
CoreModelId.llama3_2_11b_vision_instruct.value,
),
] ]
@ -82,7 +105,7 @@ class OllamaInferenceAdapter(Inference, ModelsProtocolPrivate):
return AsyncClient(host=self.url) return AsyncClient(host=self.url)
async def initialize(self) -> None: async def initialize(self) -> None:
print("Initializing Ollama, checking connectivity to server...") print(f"checking connectivity to Ollama at `{self.url}`...")
try: try:
await self.client.ps() await self.client.ps()
except httpx.ConnectError as e: except httpx.ConnectError as e:

View file

@ -12,19 +12,20 @@ from pydantic import BaseModel, Field
@json_schema_type @json_schema_type
class TGIImplConfig(BaseModel): class TGIImplConfig(BaseModel):
host: str = "localhost" url: str = Field(
port: int = 8080 description="The URL for the TGI serving endpoint",
protocol: str = "http" )
@property
def url(self) -> str:
return f"{self.protocol}://{self.host}:{self.port}"
api_token: Optional[str] = Field( api_token: Optional[str] = Field(
default=None, default=None,
description="A bearer token if your TGI endpoint is protected.", description="A bearer token if your TGI endpoint is protected.",
) )
@classmethod
def sample_run_config(cls, url: str = "${env.TGI_URL}", **kwargs):
return {
"url": url,
}
@json_schema_type @json_schema_type
class InferenceEndpointImplConfig(BaseModel): class InferenceEndpointImplConfig(BaseModel):

View file

@ -4,7 +4,7 @@
# This source code is licensed under the terms described in the LICENSE file in # This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree. # the root directory of this source tree.
from typing import Optional from typing import Any, Dict, Optional
from llama_models.schema_utils import json_schema_type from llama_models.schema_utils import json_schema_type
from pydantic import BaseModel, Field from pydantic import BaseModel, Field
@ -20,3 +20,10 @@ class TogetherImplConfig(BaseModel):
default=None, default=None,
description="The Together AI API Key", description="The Together AI API Key",
) )
@classmethod
def sample_run_config(cls, **kwargs) -> Dict[str, Any]:
return {
"url": "https://api.together.xyz/v1",
"api_key": "${env.TOGETHER_API_KEY}",
}

View file

@ -38,7 +38,7 @@ from llama_stack.providers.utils.inference.prompt_adapter import (
from .config import TogetherImplConfig from .config import TogetherImplConfig
model_aliases = [ MODEL_ALIASES = [
build_model_alias( build_model_alias(
"meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo", "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
CoreModelId.llama3_1_8b_instruct.value, CoreModelId.llama3_1_8b_instruct.value,
@ -78,7 +78,7 @@ class TogetherInferenceAdapter(
ModelRegistryHelper, Inference, NeedsRequestProviderData ModelRegistryHelper, Inference, NeedsRequestProviderData
): ):
def __init__(self, config: TogetherImplConfig) -> None: def __init__(self, config: TogetherImplConfig) -> None:
ModelRegistryHelper.__init__(self, model_aliases) ModelRegistryHelper.__init__(self, MODEL_ALIASES)
self.config = config self.config = config
self.formatter = ChatFormat(Tokenizer.get_instance()) self.formatter = ChatFormat(Tokenizer.get_instance())

View file

@ -24,3 +24,15 @@ class VLLMInferenceAdapterConfig(BaseModel):
default="fake", default="fake",
description="The API token", description="The API token",
) )
@classmethod
def sample_run_config(
cls,
url: str = "${env.VLLM_URL}",
**kwargs,
):
return {
"url": url,
"max_tokens": "${env.VLLM_MAX_TOKENS:4096}",
"api_token": "${env.VLLM_API_TOKEN:fake}",
}

View file

@ -44,7 +44,7 @@ Finally, you can override the model completely by doing:
```bash ```bash
pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \ pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
-m fireworks \ -m fireworks \
--inference-model "Llama3.1-70B-Instruct" \ --inference-model "meta-llama/Llama3.1-70B-Instruct" \
--env FIREWORKS_API_KEY=<...> --env FIREWORKS_API_KEY=<...>
``` ```

View file

@ -81,13 +81,13 @@ def pytest_addoption(parser):
parser.addoption( parser.addoption(
"--inference-model", "--inference-model",
action="store", action="store",
default="Llama3.1-8B-Instruct", default="meta-llama/Llama-3.1-8B-Instruct",
help="Specify the inference model to use for testing", help="Specify the inference model to use for testing",
) )
parser.addoption( parser.addoption(
"--safety-shield", "--safety-shield",
action="store", action="store",
default="Llama-Guard-3-8B", default="meta-llama/Llama-Guard-3-8B",
help="Specify the safety shield to use for testing", help="Specify the safety shield to use for testing",
) )

View file

@ -83,6 +83,6 @@ async def agents_stack(request, inference_model, safety_shield):
) )
for model in inference_models for model in inference_models
], ],
shields=[safety_shield], shields=[safety_shield] if safety_shield else [],
) )
return test_stack return test_stack

View file

@ -63,7 +63,7 @@ def pytest_addoption(parser):
parser.addoption( parser.addoption(
"--inference-model", "--inference-model",
action="store", action="store",
default="Llama3.2-3B-Instruct", default="meta-llama/Llama-3.2-3B-Instruct",
help="Specify the inference model to use for testing", help="Specify the inference model to use for testing",
) )

View file

@ -32,8 +32,12 @@ def pytest_configure(config):
MODEL_PARAMS = [ MODEL_PARAMS = [
pytest.param("Llama3.1-8B-Instruct", marks=pytest.mark.llama_8b, id="llama_8b"), pytest.param(
pytest.param("Llama3.2-3B-Instruct", marks=pytest.mark.llama_3b, id="llama_3b"), "meta-llama/Llama-3.1-8B-Instruct", marks=pytest.mark.llama_8b, id="llama_8b"
),
pytest.param(
"meta-llama/Llama-3.2-3B-Instruct", marks=pytest.mark.llama_3b, id="llama_3b"
),
] ]
VISION_MODEL_PARAMS = [ VISION_MODEL_PARAMS = [

View file

@ -6,7 +6,6 @@
import pytest import pytest
from llama_models.datatypes import CoreModelId
# How to run this test: # How to run this test:
# #
@ -17,11 +16,22 @@ from llama_models.datatypes import CoreModelId
class TestModelRegistration: class TestModelRegistration:
@pytest.mark.asyncio @pytest.mark.asyncio
async def test_register_unsupported_model(self, inference_stack): async def test_register_unsupported_model(self, inference_stack, inference_model):
_, models_impl = inference_stack inference_impl, models_impl = inference_stack
provider = inference_impl.routing_table.get_provider_impl(inference_model)
if provider.__provider_spec__.provider_type not in (
"meta-reference",
"remote::ollama",
"remote::vllm",
"remote::tgi",
):
pytest.skip(
"Skipping test for remote inference providers since they can handle large models like 70B instruct"
)
# Try to register a model that's too large for local inference # Try to register a model that's too large for local inference
with pytest.raises(Exception) as exc_info: with pytest.raises(ValueError) as exc_info:
await models_impl.register_model( await models_impl.register_model(
model_id="Llama3.1-70B-Instruct", model_id="Llama3.1-70B-Instruct",
) )
@ -37,21 +47,27 @@ class TestModelRegistration:
) )
@pytest.mark.asyncio @pytest.mark.asyncio
async def test_update_model(self, inference_stack): async def test_register_with_llama_model(self, inference_stack):
_, models_impl = inference_stack _, models_impl = inference_stack
# Register a model to update _ = await models_impl.register_model(
model_id = CoreModelId.llama3_1_8b_instruct.value model_id="custom-model",
old_model = await models_impl.register_model(model_id=model_id) metadata={"llama_model": "meta-llama/Llama-2-7b"},
# Update the model
new_model_id = CoreModelId.llama3_2_3b_instruct.value
updated_model = await models_impl.update_model(
model_id=model_id, provider_model_id=new_model_id
) )
# Retrieve the updated model to verify changes with pytest.raises(ValueError) as exc_info:
assert updated_model.provider_resource_id != old_model.provider_resource_id await models_impl.register_model(
model_id="custom-model-2",
metadata={"llama_model": "meta-llama/Llama-2-7b"},
provider_model_id="custom-model",
)
# Cleanup @pytest.mark.asyncio
await models_impl.unregister_model(model_id=model_id) async def test_register_with_invalid_llama_model(self, inference_stack):
_, models_impl = inference_stack
with pytest.raises(ValueError) as exc_info:
await models_impl.register_model(
model_id="custom-model-2",
metadata={"llama_model": "invalid-llama-model"},
)

View file

@ -6,7 +6,6 @@
import json import json
import tempfile import tempfile
from datetime import datetime
from typing import Any, Dict, List, Optional from typing import Any, Dict, List, Optional
from llama_stack.distribution.datatypes import * # noqa: F403 from llama_stack.distribution.datatypes import * # noqa: F403
@ -37,7 +36,6 @@ async def construct_stack_for_test(
) -> TestStack: ) -> TestStack:
sqlite_file = tempfile.NamedTemporaryFile(delete=False, suffix=".db") sqlite_file = tempfile.NamedTemporaryFile(delete=False, suffix=".db")
run_config = dict( run_config = dict(
built_at=datetime.now(),
image_name="test-fixture", image_name="test-fixture",
apis=apis, apis=apis,
providers=providers, providers=providers,

View file

@ -47,6 +47,9 @@ def safety_shield(request):
else: else:
params = {} params = {}
if not shield_id:
return None
return ShieldInput( return ShieldInput(
shield_id=shield_id, shield_id=shield_id,
params=params, params=params,

View file

@ -58,7 +58,7 @@ def pytest_addoption(parser):
parser.addoption( parser.addoption(
"--inference-model", "--inference-model",
action="store", action="store",
default="Llama3.2-3B-Instruct", default="meta-llama/Llama-3.2-3B-Instruct",
help="Specify the inference model to use for testing", help="Specify the inference model to use for testing",
) )

View file

@ -31,3 +31,8 @@ def supported_inference_models() -> List[str]:
or is_supported_safety_model(m) or is_supported_safety_model(m)
) )
] ]
ALL_HUGGINGFACE_REPOS_TO_MODEL_DESCRIPTOR = {
m.huggingface_repo: m.descriptor() for m in all_registered_models()
}

View file

@ -11,6 +11,10 @@ from llama_models.sku_list import all_registered_models
from llama_stack.providers.datatypes import Model, ModelsProtocolPrivate from llama_stack.providers.datatypes import Model, ModelsProtocolPrivate
from llama_stack.providers.utils.inference import (
ALL_HUGGINGFACE_REPOS_TO_MODEL_DESCRIPTOR,
)
ModelAlias = namedtuple("ModelAlias", ["provider_model_id", "aliases", "llama_model"]) ModelAlias = namedtuple("ModelAlias", ["provider_model_id", "aliases", "llama_model"])
@ -32,6 +36,16 @@ def build_model_alias(provider_model_id: str, model_descriptor: str) -> ModelAli
) )
def build_model_alias_with_just_provider_model_id(
provider_model_id: str, model_descriptor: str
) -> ModelAlias:
return ModelAlias(
provider_model_id=provider_model_id,
aliases=[],
llama_model=model_descriptor,
)
class ModelRegistryHelper(ModelsProtocolPrivate): class ModelRegistryHelper(ModelsProtocolPrivate):
def __init__(self, model_aliases: List[ModelAlias]): def __init__(self, model_aliases: List[ModelAlias]):
self.alias_to_provider_id_map = {} self.alias_to_provider_id_map = {}
@ -51,7 +65,7 @@ class ModelRegistryHelper(ModelsProtocolPrivate):
if identifier in self.alias_to_provider_id_map: if identifier in self.alias_to_provider_id_map:
return self.alias_to_provider_id_map[identifier] return self.alias_to_provider_id_map[identifier]
else: else:
raise ValueError(f"Unknown model: `{identifier}`") return None
def get_llama_model(self, provider_model_id: str) -> str: def get_llama_model(self, provider_model_id: str) -> str:
if provider_model_id in self.provider_id_to_llama_model_map: if provider_model_id in self.provider_id_to_llama_model_map:
@ -60,8 +74,34 @@ class ModelRegistryHelper(ModelsProtocolPrivate):
return None return None
async def register_model(self, model: Model) -> Model: async def register_model(self, model: Model) -> Model:
model.provider_resource_id = self.get_provider_model_id( provider_resource_id = self.get_provider_model_id(model.provider_resource_id)
model.provider_resource_id if provider_resource_id:
) model.provider_resource_id = provider_resource_id
else:
if model.metadata.get("llama_model") is None:
raise ValueError(
f"Model '{model.provider_resource_id}' is not available and no llama_model was specified in metadata. "
"Please specify a llama_model in metadata or use a supported model identifier"
)
existing_llama_model = self.get_llama_model(model.provider_resource_id)
if existing_llama_model:
if existing_llama_model != model.metadata["llama_model"]:
raise ValueError(
f"Provider model id '{model.provider_resource_id}' is already registered to a different llama model: '{existing_llama_model}'"
)
else:
if (
model.metadata["llama_model"]
not in ALL_HUGGINGFACE_REPOS_TO_MODEL_DESCRIPTOR
):
raise ValueError(
f"Invalid llama_model '{model.metadata['llama_model']}' specified in metadata. "
f"Must be one of: {', '.join(ALL_HUGGINGFACE_REPOS_TO_MODEL_DESCRIPTOR.keys())}"
)
self.provider_id_to_llama_model_map[model.provider_resource_id] = (
ALL_HUGGINGFACE_REPOS_TO_MODEL_DESCRIPTOR[
model.metadata["llama_model"]
]
)
return model return model
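
With this change, an unregistered provider model id can still be registered as long as the caller declares the underlying Llama model in metadata. A usage sketch based on the test changes elsewhere in this diff (the `models_impl` handle comes from the test fixtures; identifiers are illustrative):

```python
async def register_examples(models_impl) -> None:
    # Known alias: the helper resolves it through its alias table.
    await models_impl.register_model(model_id="Llama3.1-8B-Instruct")

    # Unknown provider model id: declare the served Llama model explicitly.
    await models_impl.register_model(
        model_id="custom-model",
        metadata={"llama_model": "meta-llama/Llama-2-7b"},
    )

    # An unknown id without (or with an invalid) "llama_model" raises ValueError.
```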

View file

@ -36,6 +36,15 @@ class RedisKVStoreConfig(CommonConfig):
def url(self) -> str: def url(self) -> str:
return f"redis://{self.host}:{self.port}" return f"redis://{self.host}:{self.port}"
@classmethod
def sample_run_config(cls):
return {
"type": "redis",
"namespace": None,
"host": "${env.REDIS_HOST:localhost}",
"port": "${env.REDIS_PORT:6379}",
}
class SqliteKVStoreConfig(CommonConfig): class SqliteKVStoreConfig(CommonConfig):
type: Literal[KVStoreType.sqlite.value] = KVStoreType.sqlite.value type: Literal[KVStoreType.sqlite.value] = KVStoreType.sqlite.value
@ -44,6 +53,19 @@ class SqliteKVStoreConfig(CommonConfig):
description="File path for the sqlite database", description="File path for the sqlite database",
) )
@classmethod
def sample_run_config(
cls, __distro_dir__: str = "runtime", db_name: str = "kvstore.db"
):
return {
"type": "sqlite",
"namespace": None,
"db_path": "${env.SQLITE_STORE_DIR:~/.llama/"
+ __distro_dir__
+ "}/"
+ db_name,
}
class PostgresKVStoreConfig(CommonConfig): class PostgresKVStoreConfig(CommonConfig):
type: Literal[KVStoreType.postgres.value] = KVStoreType.postgres.value type: Literal[KVStoreType.postgres.value] = KVStoreType.postgres.value
@ -54,6 +76,19 @@ class PostgresKVStoreConfig(CommonConfig):
password: Optional[str] = None password: Optional[str] = None
table_name: str = "llamastack_kvstore" table_name: str = "llamastack_kvstore"
@classmethod
def sample_run_config(cls, table_name: str = "llamastack_kvstore"):
return {
"type": "postgres",
"namespace": None,
"host": "${env.POSTGRES_HOST:localhost}",
"port": "${env.POSTGRES_PORT:5432}",
"db": "${env.POSTGRES_DB}",
"user": "${env.POSTGRES_USER}",
"password": "${env.POSTGRES_PASSWORD}",
"table_name": "${env.POSTGRES_TABLE_NAME:" + table_name + "}",
}
@classmethod @classmethod
@field_validator("table_name") @field_validator("table_name")
def validate_table_name(cls, v: str) -> str: def validate_table_name(cls, v: str) -> str:
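
These `sample_run_config` helpers emit `${env...}` placeholders rather than concrete values; the placeholders are rendered at run time by the substitution logic shown earlier. A sketch of what the SQLite helper above would return for a hypothetical distro directory:

```python
# SqliteKVStoreConfig.sample_run_config(
#     __distro_dir__="distributions/ollama", db_name="faiss_store.db"
# ) would produce roughly:
{
    "type": "sqlite",
    "namespace": None,
    "db_path": "${env.SQLITE_STORE_DIR:~/.llama/distributions/ollama}/faiss_store.db",
}
```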

Some files were not shown because too many files have changed in this diff.