# What does this PR do?

When clients called the OpenAI API with invalid input that wasn't caught by our own Pydantic API validation but was instead only caught by the backend inference provider, that provider returned an HTTP 400 error. However, we were wrapping it in an HTTP 500 error, obfuscating the actual issue from calling clients and triggering the OpenAI client's retry logic.

This change adjusts our existing `translate_exception` method in `server.py` to wrap `openai.BadRequestError` as an HTTP 400 error, passing the string representation of the error message through to the calling user so they can see the actual input validation error and correct it. I tried changing this in a few other places, but ultimately `translate_exception` was the only real place to handle this for both streaming and non-streaming requests across all inference providers that use the OpenAI server APIs.

This also tightens up our validation a bit for the OpenAI chat completions API, to catch empty `messages` parameters, invalid `tool_choice` parameters, invalid `tools` items, or passing `tool_choice` when `tools` isn't given.

Lastly, this extends our OpenAI API chat completions verifications to also check for consistent input validation across providers. Providers behind Llama Stack should automatically pass all the new tests due to the input validation added here, but some of the providers fail this test when not run behind Llama Stack due to differences in how they handle input validation and errors.

(Closes #1951)

## Test Plan

To test this, start an OpenAI API verification stack:

```
llama stack run --image-type venv tests/verifications/openai-api-verification-run.yaml
```

Then, run the new verification tests with your provider(s) of choice:

```
python -m pytest -s -v \
  tests/verifications/openai_api/test_chat_completion.py \
  --provider openai-llama-stack

python -m pytest -s -v \
  tests/verifications/openai_api/test_chat_completion.py \
  --provider together-llama-stack
```

Signed-off-by: Ben Browning <bbrownin@redhat.com>
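For illustration, here is a minimal sketch of the exception-translation idea described above. It assumes a FastAPI-style `HTTPException`; the actual `translate_exception` in `server.py` handles additional exception types and may differ in detail.

```python
# Illustrative sketch only -- not the actual implementation in server.py.
from fastapi import HTTPException
from openai import BadRequestError


def translate_exception(exc: Exception) -> HTTPException:
    if isinstance(exc, BadRequestError):
        # Surface the provider's 400 and its message instead of a generic 500,
        # so clients can see and fix the actual input validation error.
        return HTTPException(status_code=400, detail=str(exc))
    # Anything unexpected still maps to an internal server error.
    return HTTPException(status_code=500, detail="Internal server error")
```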
# Llama Stack Verifications
Llama Stack Verifications provide standardized test suites to ensure API compatibility and behavior consistency across different LLM providers. These tests help verify that different models and providers implement the expected interfaces and behaviors correctly.
## Overview
This framework allows you to run the same set of verification tests against different LLM providers' OpenAI-compatible endpoints (Fireworks, Together, Groq, Cerebras, etc., and OpenAI itself) to ensure they meet the expected behavior and interface standards.
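For context, the kind of request the suite exercises looks roughly like the sketch below; it is not part of the test code. The `base_url`, `api_key`, and model name are illustrative placeholders for whichever OpenAI-compatible provider you target.

```python
# Illustrative only -- not part of the verification suite.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",  # placeholder: any OpenAI-compatible endpoint
    api_key="<your_api_key>",                # placeholder
)

response = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # placeholder model id
    messages=[{"role": "user", "content": "Say hello in one word."}],
    stream=False,  # the suite exercises both streaming and non-streaming modes
)
print(response.choices[0].message.content)
```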
## Features
The verification suite currently tests the following in both streaming and non-streaming modes:
- Basic chat completions
- Image input capabilities
- Structured JSON output formatting
- Tool calling functionality
## Report

The latest report can be found at [REPORT.md](REPORT.md).
To update the report, ensure you have the API keys set:

```
export OPENAI_API_KEY=<your_openai_api_key>
export FIREWORKS_API_KEY=<your_fireworks_api_key>
export TOGETHER_API_KEY=<your_together_api_key>
```

then run

```
uv run --with-editable ".[dev]" python tests/verifications/generate_report.py --run-tests
```
## Running Tests
To run the verification tests, use pytest with the following parameters:
```
cd llama-stack
pytest tests/verifications/openai_api --provider=<provider-name>
```
Example:
```
# Run all tests
pytest tests/verifications/openai_api --provider=together

# Only run tests with Llama 4 models
pytest tests/verifications/openai_api --provider=together -k 'Llama-4'
```
Parameters
- `--provider`: The provider name (openai, fireworks, together, groq, cerebras, etc.)
- `--base-url`: The base URL for the provider's API (optional; defaults to the standard URL for the specified provider)
- `--api-key`: Your API key for the provider (optional; defaults to the standard API key environment variable for the specified provider)
## Supported Providers
The verification suite supports any provider with an OpenAI-compatible endpoint. See `tests/verifications/conf/` for the list of supported providers.

To run on a new provider, simply add a new YAML file to the `conf/` directory with the provider config. See `tests/verifications/conf/together.yaml` for an example.
## Adding New Test Cases
To add new test cases, create appropriate JSON files in the `openai_api/fixtures/test_cases/` directory, following the existing patterns.
## Structure
- `__init__.py` - Marks the directory as a Python package
- `conf/` - Provider-specific configuration files
- `openai_api/` - Tests specific to OpenAI-compatible APIs
  - `fixtures/` - Test fixtures and utilities
    - `fixtures.py` - Provider-specific fixtures
    - `load.py` - Utilities for loading test cases
    - `test_cases/` - JSON test case definitions
  - `test_chat_completion.py` - Tests for chat completion APIs