llama-stack/docs/source/distributions/remote_hosted_distro/nvidia.md
Matthew Farrellee 832c535aaf
feat(providers): add NVIDIA Inference embedding provider and tests (#935)
# What does this PR do?

Add a /v1/inference/embeddings implementation to the NVIDIA provider.

**Open topics** (a sketch of the current client-side call follows this list):

- *Asymmetric models*. NeMo Retriever includes asymmetric models, i.e. models that embed input differently depending on whether it is destined for storage or for lookup against storage. The /v1/inference/embeddings API does not let the user indicate which type of embedding to perform. See https://github.com/meta-llama/llama-stack/issues/934
- *Truncation*. Embedding models typically have a limited context window; e.g. 1024 tokens is common, though newer models have 8k windows. When the input is larger than this window, the endpoint cannot perform its designed function. There are two options: (0) return an error so the user can reduce the input size and retry, or (1) truncate on the user's behalf and proceed (left or right truncation are the common strategies). Many users run into context window limits and will struggle to write reliable programs around them; this struggle is especially acute without access to the model's tokenizer. The /v1/inference/embeddings API does not allow the user to delegate a truncation policy. See https://github.com/meta-llama/llama-stack/issues/933
- *Dimensions*. "Matryoshka" embedding models are available; they allow users to control the number of embedding dimensions the model produces. This is a critical feature for managing storage constraints: embeddings of 1024 dimensions that achieve 95% recall for an application may not be worth the storage cost if 512 dimensions can achieve 93% recall. Controlling embedding dimensions lets applications choose their recall/storage tradeoff. The /v1/inference/embeddings API does not allow the user to control the output dimensions. See https://github.com/meta-llama/llama-stack/issues/932
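
For reference, here is a minimal sketch (not part of this PR) of what the current /v1/inference/embeddings call looks like from the Python client SDK. It assumes a stack running locally on port 8321 and that `inference.embeddings` accepts `model_id` and `contents`, as exercised by tests/client-sdk/inference/test_embedding.py; the comments mark the knobs the API does not yet expose.

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Embed a single string with this distribution's default embedding model.
response = client.inference.embeddings(
    model_id="baai/bge-m3",
    contents=["What is the capital of France?"],
    # There is currently no parameter for:
    #   - the input type (query vs. passage) needed by asymmetric models (#934)
    #   - a truncation policy for inputs longer than the context window (#933)
    #   - the output dimension count for Matryoshka models (#932)
)

# One embedding vector per input string.
print(len(response.embeddings), len(response.embeddings[0]))
```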

## Test Plan

- `llama stack run llama_stack/templates/nvidia/run.yaml`
- `LLAMA_STACK_BASE_URL=http://localhost:8321 pytest -v tests/client-sdk/inference/test_embedding.py --embedding-model baai/bge-m3`


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [x] Wrote necessary unit or integration tests.

---------

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2025-02-20 16:59:48 -08:00


# NVIDIA Distribution

The `llamastack/distribution-nvidia` distribution consists of the following provider configurations.

| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| datasetio | `remote::huggingface`, `inline::localfs` |
| eval | `inline::meta-reference` |
| inference | `remote::nvidia` |
| safety | `inline::llama-guard` |
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::code-interpreter`, `inline::rag-runtime`, `remote::model-context-protocol` |
| vector_io | `inline::faiss` |

### Environment Variables

The following environment variables can be configured:

- `LLAMASTACK_PORT`: Port for the Llama Stack distribution server (default: `5001`)
- `NVIDIA_API_KEY`: NVIDIA API Key (default: ``)

### Models

The following models are available by default:

- `meta-llama/Llama-3-8B-Instruct` (`meta/llama3-8b-instruct`)
- `meta-llama/Llama-3-70B-Instruct` (`meta/llama3-70b-instruct`)
- `meta-llama/Llama-3.1-8B-Instruct` (`meta/llama-3.1-8b-instruct`)
- `meta-llama/Llama-3.1-70B-Instruct` (`meta/llama-3.1-70b-instruct`)
- `meta-llama/Llama-3.1-405B-Instruct-FP8` (`meta/llama-3.1-405b-instruct`)
- `meta-llama/Llama-3.2-1B-Instruct` (`meta/llama-3.2-1b-instruct`)
- `meta-llama/Llama-3.2-3B-Instruct` (`meta/llama-3.2-3b-instruct`)
- `meta-llama/Llama-3.2-11B-Vision-Instruct` (`meta/llama-3.2-11b-vision-instruct`)
- `meta-llama/Llama-3.2-90B-Vision-Instruct` (`meta/llama-3.2-90b-vision-instruct`)
- `baai/bge-m3` (`baai/bge-m3`)
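
As a quick usage illustration (not from the original doc), here is a hedged sketch of running a chat completion against one of these default models with the Python client SDK. It assumes the distribution is already running on the default port 5001 (see the sections below) and that the SDK exposes `inference.chat_completion(model_id=..., messages=...)`.

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5001")

# Chat completion against one of the default chat models listed above.
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Write a haiku about GPUs."}],
)

# The assistant's reply is carried in completion_message.content.
print(response.completion_message.content)
```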

### Prerequisite: API Keys

Make sure you have access to an NVIDIA API key. You can get one by visiting https://build.nvidia.com/.

## Running Llama Stack with NVIDIA

You can do this via Conda (build the code yourself) or Docker (which has a pre-built image).

### Via Docker

This method allows you to get started quickly without having to build the distribution code.

```bash
LLAMA_STACK_PORT=5001
docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ./run.yaml:/root/my-run.yaml \
  llamastack/distribution-nvidia \
  --yaml-config /root/my-run.yaml \
  --port $LLAMA_STACK_PORT \
  --env NVIDIA_API_KEY=$NVIDIA_API_KEY
```

### Via Conda

```bash
llama stack build --template nvidia --image-type conda
llama stack run ./run.yaml \
  --port 5001 \
  --env NVIDIA_API_KEY=$NVIDIA_API_KEY \
  --env INFERENCE_MODEL=$INFERENCE_MODEL
```
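
Once the server is up (via either method), a quick way to confirm the providers registered correctly is to list the available models from the Python client. This is a hedged sketch: it assumes the server is reachable on localhost:5001 and that the client SDK exposes `models.list()`.

```python
from llama_stack_client import LlamaStackClient

# Point the client at the locally running distribution.
client = LlamaStackClient(base_url="http://localhost:5001")

# Print the identifier of every model registered with the stack
# (should include the chat models and baai/bge-m3 listed above).
for model in client.models.list():
    print(model.identifier)
```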