llama-stack-mirror/llama_stack/providers
Ben Browning a5827f7cb3 Nvidia provider support for OpenAI API endpoints
This wires up the openai_completion and openai_chat_completion API
methods for the remote Nvidia inference provider, and adds that
provider to the chat completions portion of the OpenAI test suite.

The hosted Nvidia service doesn't actually host any Llama models with
functioning completions and chat completions endpoints, so for now the
test suite activates the nvidia provider only for chat completions.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-10 13:43:28 -04:00
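For context, a minimal sketch of how a client could exercise the OpenAI-compatible chat completions path once a provider such as nvidia backs it. The base URL, port, and model id below are illustrative assumptions, not values taken from this commit; only the openai client calls themselves are standard.

```python
# Minimal sketch: calling an OpenAI-compatible chat completions endpoint
# served by a llama-stack deployment. The base_url, api_key handling, and
# model id are assumptions for illustration, not taken from this commit.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8321/v1/openai/v1",  # assumed OpenAI-compat prefix
    api_key="not-needed-locally",                   # placeholder; real deployments may require a key
)

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # hypothetical model id
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```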
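And a hedged sketch of how a test suite might enable a provider for chat completions but not plain completions, mirroring the note above; the pytest structure, provider lists, and test names are hypothetical, not the actual suite referenced in this commit.

```python
import pytest

# Hypothetical provider lists: nvidia appears only under chat completions,
# since its plain completions endpoint isn't exercised (per the commit note).
COMPLETION_PROVIDERS = ["openai", "together"]
CHAT_COMPLETION_PROVIDERS = ["openai", "together", "nvidia"]

@pytest.mark.parametrize("provider", COMPLETION_PROVIDERS)
def test_openai_completion(provider):
    # A real suite would build a client for `provider` and call the
    # completions endpoint; elided here.
    assert provider in COMPLETION_PROVIDERS

@pytest.mark.parametrize("provider", CHAT_COMPLETION_PROVIDERS)
def test_openai_chat_completion(provider):
    # nvidia is parametrized here but deliberately absent above.
    assert provider in CHAT_COMPLETION_PROVIDERS
```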
inline        Mark inline vllm as OpenAI unsupported inference                              2025-04-09 15:47:02 -04:00
registry      test: verification on provider's OAI endpoints (#1893)                        2025-04-07 23:06:28 -07:00
remote        Nvidia provider support for OpenAI API endpoints                              2025-04-10 13:43:28 -04:00
tests         refactor: move all llama code to models/llama out of meta reference (#1887)   2025-04-07 15:03:58 -07:00
utils         Add prompt_logprobs and guided_choice to OpenAI completions                   2025-04-09 15:47:02 -04:00
__init__.py   API Updates (#73)                                                             2024-09-17 19:51:35 -07:00
datatypes.py  chore: more mypy checks (ollama, vllm, ...) (#1777)                           2025-04-01 17:12:39 +02:00