llama-stack-mirror/tests/integration/inference
Ben Browning a5827f7cb3 Nvidia provider support for OpenAI API endpoints
This wires up the openai_completion and openai_chat_completion API
methods for the remote Nvidia inference provider, and adds that
provider to the chat completions part of the OpenAI test suite.

The hosted Nvidia service doesn't actually host any Llama models with
functioning completions and chat completions endpoints, so for now the
test suite only activates the nvidia provider for chat completions.
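The chat completions path these tests exercise speaks the standard OpenAI wire format. As a rough sketch (the helper function and the model id below are illustrative placeholders, not names taken from the llama-stack code), a request body for such an OpenAI-compatible endpoint looks like:

```python
# Illustrative sketch only: build_chat_request is a hypothetical helper,
# and "meta/llama-3.1-8b-instruct" is a placeholder model id.
def build_chat_request(model: str, user_message: str, stream: bool = False) -> dict:
    """Assemble an OpenAI-compatible /v1/chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": stream,
    }

payload = build_chat_request("meta/llama-3.1-8b-instruct", "Hello!")
```

A provider implementing openai_chat_completion ultimately serializes a body of this shape; the integration tests assert on the response structure the endpoint returns.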

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-10 13:43:28 -04:00
__init__.py                fix: remove ruff N999 (#1388)                                             2025-03-07 11:14:04 -08:00
dog.png                    refactor: tests/unittests -> tests/unit; tests/api -> tests/integration   2025-03-04 09:57:00 -08:00
test_embedding.py          refactor: tests/unittests -> tests/unit; tests/api -> tests/integration   2025-03-04 09:57:00 -08:00
test_openai_completion.py  Nvidia provider support for OpenAI API endpoints                          2025-04-10 13:43:28 -04:00
test_text_inference.py     test: verification on provider's OAI endpoints (#1893)                    2025-04-07 23:06:28 -07:00
test_vision_inference.py   test: verification on provider's OAI endpoints (#1893)                    2025-04-07 23:06:28 -07:00
vision_test_1.jpg          feat: introduce llama4 support (#1877)                                    2025-04-05 11:53:35 -07:00
vision_test_2.jpg          feat: introduce llama4 support (#1877)                                    2025-04-05 11:53:35 -07:00
vision_test_3.jpg          feat: introduce llama4 support (#1877)                                    2025-04-05 11:53:35 -07:00