llama-stack-mirror/llama_stack/templates
Xi Yan b76bef169c
fix nvidia inference provider (#781)
# What does this PR do?

- fixes the NVIDIA inference provider to account for the strategy update (a sketch of the parameter mapping follows this list)
- updates the NVIDIA templates
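
Here, "strategy update" presumably refers to the refactor that moved sampling controls (temperature, top_p, top_k) under a typed `strategy` field on `SamplingParams`. As a rough illustration of the mapping the adapter has to perform, below is a self-contained sketch that flattens a strategy object into the OpenAI-style request fields an endpoint such as NVIDIA's expects; the class names mirror llama-stack conventions but are redeclared here and may differ from the provider's actual code:

```python
# Hedged sketch: flatten a typed sampling strategy into request parameters.
# These dataclasses stand in for llama-stack's SamplingParams strategy types;
# treat the names and defaults as assumptions, not the provider's exact API.
from dataclasses import dataclass
from typing import Union


@dataclass
class GreedySamplingStrategy:
    pass


@dataclass
class TopPSamplingStrategy:
    temperature: float = 1.0
    top_p: float = 0.9


@dataclass
class TopKSamplingStrategy:
    top_k: int = 40


SamplingStrategy = Union[GreedySamplingStrategy, TopPSamplingStrategy, TopKSamplingStrategy]


def strategy_to_request_params(strategy: SamplingStrategy) -> dict:
    """Map a typed strategy onto the flat fields an OpenAI-style API expects."""
    if isinstance(strategy, GreedySamplingStrategy):
        # Greedy decoding: temperature 0 selects the argmax token each step.
        return {"temperature": 0.0}
    if isinstance(strategy, TopPSamplingStrategy):
        return {"temperature": strategy.temperature, "top_p": strategy.top_p}
    if isinstance(strategy, TopKSamplingStrategy):
        return {"top_k": strategy.top_k}
    raise ValueError(f"unsupported sampling strategy: {strategy!r}")


print(strategy_to_request_params(TopPSamplingStrategy(temperature=0.7, top_p=0.95)))
# -> {'temperature': 0.7, 'top_p': 0.95}
```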

## Test Plan

```
# start the NVIDIA distribution on port 5000
llama stack run ./llama_stack/templates/nvidia/run.yaml --port 5000

# run the client-sdk inference tests against the running stack
LLAMA_STACK_BASE_URL="http://localhost:5000" pytest -v tests/client-sdk/inference/test_inference.py --html=report.html --self-contained-html
```
![test results](https://github.com/user-attachments/assets/d20f9aea-525e-47de-a5be-586e022e0d55)
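
For a quick manual check outside pytest, here is a minimal smoke test mirroring what the client-sdk inference tests exercise; it assumes `llama-stack-client` is installed and that the model ID below is actually served by the NVIDIA distribution (list candidates with `client.models.list()`):

```python
# Hypothetical smoke test against a stack running locally on port 5000.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")

# model_id is an assumption; substitute one registered with your distribution
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.completion_message.content)
```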

**NOTE** Known issues with this provider:
- vision inference is broken
- tool calling is broken
- `/completion` is broken

cc @mattf @cdgamarose-nv for improving the NVIDIA inference adapter

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2025-01-15 18:49:36 -08:00
| Name | Last commit | Date |
| --- | --- | --- |
| bedrock | rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744) | 2025-01-10 11:09:49 -08:00 |
| cerebras | rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744) | 2025-01-10 11:09:49 -08:00 |
| experimental-post-training | add braintrust to experimental-post-training template (#763) | 2025-01-14 13:42:59 -08:00 |
| fireworks | Fix fireworks run-with-safety template (#766) | 2025-01-14 15:28:55 -08:00 |
| hf-endpoint | rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744) | 2025-01-10 11:09:49 -08:00 |
| hf-serverless | rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744) | 2025-01-10 11:09:49 -08:00 |
| meta-reference-gpu | rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744) | 2025-01-10 11:09:49 -08:00 |
| meta-reference-quantized-gpu | rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744) | 2025-01-10 11:09:49 -08:00 |
| nvidia | fix nvidia inference provider (#781) | 2025-01-15 18:49:36 -08:00 |
| ollama | Consolidating Safety tests from various places under client-sdk (#699) | 2025-01-13 17:46:24 -08:00 |
| remote-vllm | Fix issue when generating distros (#755) | 2025-01-15 05:34:08 -08:00 |
| tgi | rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744) | 2025-01-10 11:09:49 -08:00 |
| together | Consolidating Safety tests from various places under client-sdk (#699) | 2025-01-13 17:46:24 -08:00 |
| vllm-gpu | rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744) | 2025-01-10 11:09:49 -08:00 |
| `__init__.py` | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| `template.py` | agents to use tools api (#673) | 2025-01-08 19:01:00 -08:00 |