# NVIDIA Distribution
The `llamastack/distribution-nvidia` distribution consists of the following provider configurations.
| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| datasetio | `remote::huggingface`, `inline::localfs` |
| eval | `inline::meta-reference` |
| inference | `remote::nvidia` |
| safety | `inline::llama-guard` |
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::code-interpreter`, `inline::rag-runtime`, `remote::model-context-protocol` |
| vector_io | `inline::faiss` |
### Environment Variables
The following environment variables can be configured:
- `LLAMASTACK_PORT`: Port for the Llama Stack distribution server (default: `5001`)
- `NVIDIA_API_KEY`: NVIDIA API Key (default: ``)
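These can be exported in your shell before launching the server; for example:

```bash
# Example: keep the default port explicit in your environment.
export LLAMASTACK_PORT=5001
```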
### Models
The following models are available by default:
- `meta/llama3-8b-instruct` (aliases: `meta-llama/Llama-3-8B-Instruct`)
- `meta/llama3-70b-instruct` (aliases: `meta-llama/Llama-3-70B-Instruct`)
- `meta/llama-3.1-8b-instruct` (aliases: `meta-llama/Llama-3.1-8B-Instruct`)
- `meta/llama-3.1-70b-instruct` (aliases: `meta-llama/Llama-3.1-70B-Instruct`)
- `meta/llama-3.1-405b-instruct` (aliases: `meta-llama/Llama-3.1-405B-Instruct-FP8`)
- `meta/llama-3.2-1b-instruct` (aliases: `meta-llama/Llama-3.2-1B-Instruct`)
- `meta/llama-3.2-3b-instruct` (aliases: `meta-llama/Llama-3.2-3B-Instruct`)
- `meta/llama-3.2-11b-vision-instruct` (aliases: `meta-llama/Llama-3.2-11B-Vision-Instruct`)
- `meta/llama-3.2-90b-vision-instruct` (aliases: `meta-llama/Llama-3.2-90B-Vision-Instruct`)
- `nvidia/llama-3.2-nv-embedqa-1b-v2`
- `nvidia/nv-embedqa-e5-v5`
- `nvidia/nv-embedqa-mistral-7b-v2`
- `snowflake/arctic-embed-l`
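Once a server is running, you can check which models (and aliases) were actually registered. A minimal sketch, assuming the `llama-stack-client` CLI is installed and the server is listening on the default port:

```bash
# Point the client at the local server, then list registered models.
llama-stack-client configure --endpoint http://localhost:5001
llama-stack-client models list
```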
### Prerequisite: API Keys
Make sure you have access to an NVIDIA API Key. You can get one by visiting https://build.nvidia.com/.
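Once you have a key, export it so the launch commands below can pass it into the server (the value shown is a placeholder):

```bash
# Placeholder -- replace with the key you generated at https://build.nvidia.com/.
export NVIDIA_API_KEY=nvapi-xxxxxxxxxxxxxxxx
```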
## Running Llama Stack with NVIDIA
You can do this via Conda (building the code yourself) or via Docker, which has a pre-built image.
### Via Docker
This method allows you to get started quickly without having to build the distribution code.
```bash
LLAMA_STACK_PORT=5001
docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ./run.yaml:/root/my-run.yaml \
  llamastack/distribution-nvidia \
  --yaml-config /root/my-run.yaml \
  --port $LLAMA_STACK_PORT \
  --env NVIDIA_API_KEY=$NVIDIA_API_KEY
```
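Once the container is up, a quick smoke test is to list the registered models over HTTP; this sketch assumes the server exposes the `/v1/models` listing route:

```bash
# Should return the model identifiers and aliases listed above.
curl -s http://localhost:$LLAMA_STACK_PORT/v1/models
```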
### Via Conda
```bash
llama stack build --template nvidia --image-type conda
llama stack run ./run.yaml \
  --port 5001 \
  --env NVIDIA_API_KEY=$NVIDIA_API_KEY \
  --env INFERENCE_MODEL=$INFERENCE_MODEL
```
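The run command above references an `INFERENCE_MODEL` variable that is not defined elsewhere on this page; as an example, you could point it at one of the default models listed above before launching:

```bash
# Assumed example: use one of the distribution's default inference models.
export INFERENCE_MODEL=meta/llama-3.1-8b-instruct
```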