fix nvidia inference provider (#781)
# What does this PR do?

- fixes the NVIDIA inference provider to account for the sampling-strategy update (see the sketch after this list)
- updates the NVIDIA templates
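
For context, a minimal sketch of the request shape this likely implies (assuming the strategy update replaced flat sampling fields with a tagged `strategy` union on `SamplingParams`; the field names and model ID below are illustrative, not confirmed by this PR):

```python
# Hypothetical request body after the sampling-strategy update.
# Assumption: sampling_params now carries a tagged union under "strategy"
# instead of flat temperature/top_p fields.
request = {
    "model_id": "meta-llama/Llama-3.1-8B-Instruct",  # illustrative model ID
    "messages": [{"role": "user", "content": "Hello"}],
    "sampling_params": {
        "strategy": {
            "type": "top_p",  # other variants might be "greedy" or "top_k"
            "temperature": 0.7,
            "top_p": 0.95,
        },
        "max_tokens": 128,
    },
}
```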

## Test Plan

```
llama stack run ./llama_stack/templates/nvidia/run.yaml --port 5000

LLAMA_STACK_BASE_URL="http://localhost:5000" pytest -v tests/client-sdk/inference/test_inference.py --html=report.html --self-contained-html
```
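
For a quick manual smoke test against the running server, a sketch along these lines should also work (assuming the `llama-stack-client` SDK's `LlamaStackClient` and its `inference.chat_completion` call; the model ID is illustrative):

```python
from llama_stack_client import LlamaStackClient

# Point the client at the stack started by `llama stack run` above.
client = LlamaStackClient(base_url="http://localhost:5000")

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model ID
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.completion_message.content)
```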
<img width="1288" alt="image"
src="https://github.com/user-attachments/assets/d20f9aea-525e-47de-a5be-586e022e0d55"
/>

**NOTE**
- vision inference is broken
- tool calling is broken
- `/completion` is broken

cc @mattf @cdgamarose-nv for follow-up improvements to the NVIDIA inference adapter

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.