llama-stack-mirror/llama_stack/providers/remote/inference/nvidia
Matthew Farrellee 706b4ca651
feat: support nvidia hosted vision models (llama 3.2 11b/90b) (#1278)
# What does this PR do?

Support the NVIDIA-hosted Llama 3.2 11B/90B vision models. They are not served
from the common https://integrate.api.nvidia.com/v1 endpoint; each model is
hosted at its own individual URL.
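
Because of that, the adapter has to choose a base URL per model rather than using one shared endpoint. A minimal sketch of what such routing could look like, assuming hypothetical per-model URLs (the real endpoints live in `nvidia.py`; the URLs below are placeholders, not NVIDIA's actual ones):

```python
# Sketch only: route hosted vision models to model-specific base URLs.
# The URLs in this map are illustrative assumptions, not real endpoints.
DEFAULT_BASE_URL = "https://integrate.api.nvidia.com/v1"

VISION_MODEL_BASE_URLS = {
    "meta/llama-3.2-11b-vision-instruct": "https://example.api.nvidia.com/v1/meta/llama-3.2-11b-vision-instruct",
    "meta/llama-3.2-90b-vision-instruct": "https://example.api.nvidia.com/v1/meta/llama-3.2-90b-vision-instruct",
}


def base_url_for(model_id: str) -> str:
    """Return the model-specific endpoint if one exists, else the common URL."""
    return VISION_MODEL_BASE_URLS.get(model_id, DEFAULT_BASE_URL)
```

Requests for every other model keep going to the common endpoint unchanged.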

## Test Plan

```
LLAMA_STACK_BASE_URL=http://localhost:8321 pytest -s -v \
  tests/client-sdk/inference/test_vision_inference.py \
  --inference-model=meta/llama-3.2-11b-vision-instruct -k image
```
2025-03-18 11:54:10 -07:00
| File | Last commit | Date |
|------|-------------|------|
| `__init__.py` | add NVIDIA NIM inference adapter (#355) | 2024-11-23 15:59:00 -08:00 |
| `config.py` | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| `models.py` | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| `nvidia.py` | feat: support nvidia hosted vision models (llama 3.2 11b/90b) (#1278) | 2025-03-18 11:54:10 -07:00 |
| `openai_utils.py` | chore(lint): update Ruff ignores for project conventions and maintainability (#1184) | 2025-02-28 09:36:49 -08:00 |
| `utils.py` | style: remove prints in codebase (#1146) | 2025-02-18 19:41:37 -08:00 |