llama-stack-mirror/tests
Ben Browning a5827f7cb3 Nvidia provider support for OpenAI API endpoints
This wires up the openai_completion and openai_chat_completion API
methods for the remote Nvidia inference provider, and adds it to the
chat completions part of the OpenAI test suite.

The hosted Nvidia service doesn't actually host any Llama models with
functioning completions and chat completions endpoints, so for now the
test suite only activates the nvidia provider for chat completions.
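The gating described above (nvidia enabled for chat completions but not plain completions) could be sketched as a small per-provider capability map; the names `SUPPORTED_ENDPOINTS` and `should_run` are hypothetical illustrations, not identifiers from the actual test suite.

```python
# Hypothetical sketch: decide which OpenAI-compat endpoint groups the
# test suite exercises for a given provider. Mirrors the idea that the
# hosted Nvidia service is only activated for chat completions for now.

SUPPORTED_ENDPOINTS = {
    "nvidia": {"chat_completions"},                  # no working completions endpoint yet
    "ollama": {"completions", "chat_completions"},   # assumed full OpenAI-compat support
}


def should_run(provider: str, endpoint_group: str) -> bool:
    """Return True if the suite should exercise this endpoint group for the provider."""
    return endpoint_group in SUPPORTED_ENDPOINTS.get(provider, set())
```

A test runner could consult such a map to skip, rather than fail, endpoint groups a provider does not yet host.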

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-10 13:43:28 -04:00
Directory | Last commit | Date
client-sdk/post_training | feat: Add nemo customizer (#1448) | 2025-03-25 11:01:10 -07:00
external-provider/llama-stack-provider-ollama | feat: ability to execute external providers (#1672) | 2025-04-09 10:30:41 +02:00
integration | Nvidia provider support for OpenAI API endpoints | 2025-04-10 13:43:28 -04:00
unit | feat: ability to execute external providers (#1672) | 2025-04-09 10:30:41 +02:00
verifications | feat: adds test suite to verify provider's OAI compat endpoints (#1901) | 2025-04-08 21:21:38 -07:00
__init__.py | refactor(test): introduce --stack-config and simplify options (#1404) | 2025-03-05 17:02:02 -08:00