llama-stack/llama_stack/providers/utils
Matthew Farrellee 4e6c984c26
add NVIDIA NIM inference adapter (#355)
# What does this PR do?

This PR adds a basic inference adapter for NVIDIA NIMs.

What it does (a minimal request sketch follows the lists below):
 - chat completion API
   - tool calls
   - streaming
   - structured output
   - logprobs
 - supports hosted NIMs at integrate.api.nvidia.com
 - supports downloaded NIM containers

What it does not do:
 - completion API
 - embedding API
 - vision models
 - builtin tools
 - guarantee that the sampling strategy mapping is correct
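
To make the supported path concrete, here is a minimal sketch of the kind of OpenAI-compatible chat completion request the adapter forwards to a hosted NIM. It calls integrate.api.nvidia.com directly via the `openai` client rather than going through llama-stack, and the model id below is only an example, not something this PR prescribes:

```python
import os

from openai import OpenAI

# Hosted NIMs expose an OpenAI-compatible API; the adapter maps llama-stack
# chat completion requests onto it. The model id is an example only.
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],
)

# Non-streaming chat completion
resp = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)

# Streaming chat completion
stream = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "Count to three."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```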

## Feature/Issue validation/testing/test plan

```shell
pytest -s -v --providers inference=nvidia llama_stack/providers/tests/inference/ --env NVIDIA_API_KEY=...
```

All tests should pass. There are Pydantic v1 warnings.
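
For the downloaded-container case, the adapter is expected to talk to the same OpenAI-compatible API served locally. A minimal sketch, assuming the container listens on port 8000 and serves the example model shown (both the port and the model name are assumptions for illustration, not part of this PR):

```python
import requests

# A locally downloaded NIM container serves the same OpenAI-compatible API;
# the port and model name here are assumptions for illustration only.
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "meta/llama-3.1-8b-instruct",
        "messages": [{"role": "user", "content": "Say hello."}],
        "max_tokens": 32,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```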


## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Did you read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Was this discussed/approved via a GitHub issue? Please add a link
      to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes?
- [x] Did you write any new necessary tests?

Thanks for contributing 🎉!
| Name | Last commit | Date |
|---|---|---|
| bedrock | Update more distribution docs to be simpler and partially codegen'ed | 2024-11-20 22:03:44 -08:00 |
| datasetio | [Evals API][11/n] huggingface dataset provider + mmlu scoring fn (#392) | 2024-11-11 14:49:50 -05:00 |
| inference | add NVIDIA NIM inference adapter (#355) | 2024-11-23 15:59:00 -08:00 |
| kvstore | use logging instead of prints (#499) | 2024-11-21 11:32:53 -08:00 |
| memory | use logging instead of prints (#499) | 2024-11-21 11:32:53 -08:00 |
| scoring | fix tests after registration migration & rename meta-reference -> basic / llm_as_judge provider (#424) | 2024-11-12 10:35:44 -05:00 |
| telemetry | Fix opentelemetry adapter (#510) | 2024-11-22 18:18:11 -08:00 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |