Matthew Farrellee 832c535aaf
feat(providers): add NVIDIA Inference embedding provider and tests (#935)
# What does this PR do?

Adds a /v1/inference/embeddings implementation to the NVIDIA provider.

**open topics** -
- *asymmetric models*. NeMo Retriever includes asymmetric models, which embed input differently depending on whether it is destined for storage or for lookup against storage. The /v1/inference/embeddings API does not let the user indicate which type of embedding to perform. See https://github.com/meta-llama/llama-stack/issues/934
- *truncation*. Embedding models typically have a limited context window; 1024 tokens is common, though newer models offer 8k windows. When the input exceeds this window, the endpoint cannot perform its designed function. There are two options: (0) return an error so the user can reduce the input size and retry, or (1) truncate on the user's behalf and proceed (left and right truncation are the common strategies). Many users encounter context-window limits and will struggle to write reliable programs, especially without access to the model's tokenizer. The /v1/inference/embeddings API does not let the user delegate a truncation policy. See https://github.com/meta-llama/llama-stack/issues/933
- *dimensions*. "Matryoshka" embedding models let users control the number of dimensions the model produces, a critical feature for managing storage constraints. A 1024-dimension embedding that achieves 95% recall for an application may not be worth the storage cost if 512 dimensions can achieve 93% recall; controlling output dimensions lets applications choose their own recall/storage tradeoff. The /v1/inference/embeddings API does not let the user control the output dimensions. See https://github.com/meta-llama/llama-stack/issues/932
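
To make the asymmetric-model gap concrete, here is a minimal sketch of the request shape such models expect. The `input_type` field follows the convention used by NVIDIA's retriever embedding NIMs, and the model name is only an illustrative example; neither is expressible through the current /v1/inference/embeddings API.

```python
def build_embedding_request(text: str, mode: str) -> dict:
    """Build a request body for an asymmetric embedding model.

    `mode` must be "query" (embedding for lookup) or "passage"
    (embedding for storage) -- asymmetric models embed the two differently.
    """
    if mode not in ("query", "passage"):
        raise ValueError(f"unknown input_type: {mode}")
    return {
        "model": "nvidia/nv-embedqa-e5-v5",  # example asymmetric model
        "input": [text],
        "input_type": mode,  # the field /v1/inference/embeddings cannot express
    }
```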
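Option (1) above, truncating on the user's behalf, reduces to choosing which end of the token sequence to keep. A minimal sketch, assuming the caller already has a token list (which, as noted, is exactly what users often lack without the model's tokenizer):

```python
def truncate(tokens: list, max_tokens: int, side: str = "right") -> list:
    """Fit `tokens` into a context window of `max_tokens`.

    "right" keeps the beginning of the input and drops the end;
    "left" drops the beginning and keeps the end.
    """
    if len(tokens) <= max_tokens:
        return tokens
    return tokens[:max_tokens] if side == "right" else tokens[-max_tokens:]
```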
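The storage tradeoff works because Matryoshka-trained embeddings remain usable when only a prefix of the vector is kept. A sketch of the client-side shortening step (keep the first N components, then re-normalize to unit length):

```python
import math

def shorten(vec: list, dims: int) -> list:
    """Keep the first `dims` components of a Matryoshka embedding
    and re-normalize so cosine similarity still behaves."""
    head = vec[:dims]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]
```

For example, shortening a 1024-dimension vector to 512 dimensions halves storage, at the small recall cost described above.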

## Test Plan

- `llama stack run llama_stack/templates/nvidia/run.yaml`
- `LLAMA_STACK_BASE_URL=http://localhost:8321 pytest -v tests/client-sdk/inference/test_embedding.py --embedding-model baai/bge-m3`




## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [x] Wrote necessary unit or integration tests.

---------

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2025-02-20 16:59:48 -08:00