llama-stack-mirror/llama_stack/providers/remote/inference/nvidia
Ben Browning a5827f7cb3 Nvidia provider support for OpenAI API endpoints
This wires up the openai_completion and openai_chat_completion API
methods for the remote Nvidia inference provider, and adds the provider
to the chat-completions part of the OpenAI test suite.
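
As a rough illustration (not the provider's actual code), the wiring can
be as thin as forwarding each request to an OpenAI-compatible client
pointed at the NVIDIA endpoint; the class name, base URL, and parameter
shapes below are assumptions for the sketch:

    from openai import AsyncOpenAI

    class NVIDIAInferenceAdapter:
        """Sketch of a remote inference adapter; not the real implementation."""

        def __init__(self, base_url: str = "https://integrate.api.nvidia.com/v1",
                     api_key: str = "") -> None:
            # NVIDIA's hosted NIM endpoints speak the OpenAI wire protocol,
            # so a stock AsyncOpenAI client can talk to them directly.
            self._client = AsyncOpenAI(base_url=base_url, api_key=api_key)

        async def openai_chat_completion(self, model: str,
                                         messages: list[dict], **params):
            # Pass the request straight through; the remote service
            # implements the OpenAI /chat/completions semantics.
            return await self._client.chat.completions.create(
                model=model, messages=messages, **params)

        async def openai_completion(self, model: str, prompt: str, **params):
            # Same pass-through pattern for the legacy /completions endpoint.
            return await self._client.completions.create(
                model=model, prompt=prompt, **params)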

The hosted Nvidia service doesn't currently host any Llama models with a
functioning plain completions endpoint, so for now the test suite
activates the nvidia provider only for chat completions.
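
One way such a restriction could be expressed in a pytest-based suite
(the fixture and names here are invented for the example, not the actual
test code) is to skip plain-completions tests for providers known to
lack a working completions endpoint:

    import pytest

    # Providers whose hosted service only has a working chat completions
    # endpoint; plain completions tests are skipped for them.
    CHAT_ONLY_PROVIDERS = {"nvidia"}

    @pytest.fixture(params=["nvidia", "openai"])
    def provider_id(request):
        return request.param

    def test_completion(provider_id):
        if provider_id in CHAT_ONLY_PROVIDERS:
            pytest.skip(f"{provider_id}: no functioning completions endpoint")
        ...  # exercise the openai_completion path here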

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-10 13:43:28 -04:00
__init__.py add NVIDIA NIM inference adapter (#355) 2024-11-23 15:59:00 -08:00
config.py chore: move all Llama Stack types from llama-models to llama-stack (#1098) 2025-02-14 09:10:59 -08:00
models.py refactor: move all llama code to models/llama out of meta reference (#1887) 2025-04-07 15:03:58 -07:00
nvidia.py Nvidia provider support for OpenAI API endpoints 2025-04-10 13:43:28 -04:00
openai_utils.py refactor: move all llama code to models/llama out of meta reference (#1887) 2025-04-07 15:03:58 -07:00
utils.py style: remove prints in codebase (#1146) 2025-02-18 19:41:37 -08:00