llama-stack/llama_stack/providers/remote/inference
Xi Yan · b76bef169c · fix nvidia inference provider (#781) · 2025-01-15 18:49:36 -08:00
# What does this PR do?

- Fixes the NVIDIA inference provider to account for the `SamplingParams.strategy` union update (#767); see the sketch below.
- Updates the NVIDIA templates accordingly.
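
For context, the "strategy update" refers to `SamplingParams.strategy` becoming a tagged union (#767, "Convert SamplingParams.strategy to a union"). The sketch below shows one plausible way a remote adapter can flatten the union variants into provider request fields; the class names, field names, and helper are illustrative assumptions modeled on that PR title, not code copied from this repository.

```python
from dataclasses import dataclass
from typing import Union

# Stand-ins for the strategy union from #767; field names are assumptions.
@dataclass
class GreedySamplingStrategy:
    pass

@dataclass
class TopPSamplingStrategy:
    temperature: float = 1.0
    top_p: float = 0.9

@dataclass
class TopKSamplingStrategy:
    top_k: int = 40

SamplingStrategy = Union[GreedySamplingStrategy, TopPSamplingStrategy, TopKSamplingStrategy]

def strategy_to_request_fields(strategy: SamplingStrategy) -> dict:
    """Flatten a strategy variant into provider request fields (illustrative)."""
    if isinstance(strategy, GreedySamplingStrategy):
        # Greedy decoding: pin temperature to zero.
        return {"temperature": 0.0}
    if isinstance(strategy, TopPSamplingStrategy):
        return {"temperature": strategy.temperature, "top_p": strategy.top_p}
    if isinstance(strategy, TopKSamplingStrategy):
        return {"top_k": strategy.top_k}
    raise ValueError(f"unsupported sampling strategy: {strategy!r}")
```

Dispatching on the variant type keeps greedy, top-p, and top-k mutually exclusive, which is the usual motivation for replacing loose optional fields with a union.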

## Test Plan

```
llama stack run ./llama_stack/templates/nvidia/run.yaml --port 5000

LLAMA_STACK_BASE_URL="http://localhost:5000" pytest -v tests/client-sdk/inference/test_inference.py --html=report.html --self-contained-html
```
<img width="1288" alt="image" src="https://github.com/user-attachments/assets/d20f9aea-525e-47de-a5be-586e022e0d55" />
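
Beyond the pytest suite, a one-off call through the Python client gives a quick manual smoke test. This is a minimal sketch assuming the `llama_stack_client` SDK is installed and that the NVIDIA distribution registers a Llama 3.1 instruct model; the model id and the response attribute names here are assumptions, so adjust them to whatever `client.models.list()` reports.

```python
from llama_stack_client import LlamaStackClient

# Point the client at the server started by `llama stack run` above.
client = LlamaStackClient(base_url="http://localhost:5000")

# Model id is an assumption; check the models registered on your stack.
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.completion_message.content)
```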

**NOTE**: the following are still broken in this provider:
- vision inference
- tool calling
- `/completion`

cc @mattf @cdgamarose-nv for follow-up improvements to the NVIDIA inference adapter


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
| Name | Last commit | Date |
| --- | --- | --- |
| bedrock | Convert SamplingParams.strategy to a union (#767) | 2025-01-15 05:38:51 -08:00 |
| cerebras | Convert SamplingParams.strategy to a union (#767) | 2025-01-15 05:38:51 -08:00 |
| databricks | remove conflicting default for tool prompt format in chat completion (#742) | 2025-01-10 10:41:53 -08:00 |
| fireworks | [Fireworks] Update model name for Fireworks (#753) | 2025-01-13 15:53:57 -08:00 |
| groq | Convert SamplingParams.strategy to a union (#767) | 2025-01-15 05:38:51 -08:00 |
| nvidia | fix nvidia inference provider (#781) | 2025-01-15 18:49:36 -08:00 |
| ollama | remove conflicting default for tool prompt format in chat completion (#742) | 2025-01-10 10:41:53 -08:00 |
| sample | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| tgi | remove conflicting default for tool prompt format in chat completion (#742) | 2025-01-10 10:41:53 -08:00 |
| together | remove conflicting default for tool prompt format in chat completion (#742) | 2025-01-10 10:41:53 -08:00 |
| vllm | remove conflicting default for tool prompt format in chat completion (#742) | 2025-01-10 10:41:53 -08:00 |
| `__init__.py` | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |