This gets the fireworks provider passing 100% of our OpenAI API
verification tests when run against a Llama Stack server using the
fireworks provider. Testing against Fireworks directly, without Llama
Stack in the middle, has a lower pass rate.
The main changes are in how we divert Llama model OpenAI chat
completion requests to the Llama Stack chat completion API (instead of
OpenAI's), which applies all the client-side formatting necessary to get
tool calls working properly on Fireworks.
A side-effect of this work is that any provider using the
OpenAIChatCompletionToLlamaStackMixin (renamed from
OpenAIChatCompletionUnsupportedMixin) will also get a better
conversion from OpenAI to Llama Stack, for both streaming and
non-streaming responses.
A small change was required in
`llama_stack/models/llama/llama3/tool_utils.py` to get tests to 100%,
because code there incorrectly assumed that any JSON response with a
`name` key was a tool call response. One of our verification tests
produces a JSON response with a `name` key that is not a tool call,
so I tightened the logic to require both a `name` and a
`parameters` key in the JSON response before it gets considered a
potential tool call. The code already required the `parameters` key
downstream, but it wasn't explicitly checking for its existence.
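The tightened check can be sketched as follows. This is a standalone illustration, not the actual code in `tool_utils.py` (the function name here is made up):

```python
import json


def is_potential_tool_call(response_text: str) -> bool:
    """Illustrative sketch: only treat a JSON response as a potential
    tool call if it has BOTH a "name" and a "parameters" key, so that
    ordinary JSON output that happens to contain a "name" key is not
    misclassified."""
    try:
        parsed = json.loads(response_text)
    except json.JSONDecodeError:
        return False
    return isinstance(parsed, dict) and "name" in parsed and "parameters" in parsed
```

With only the old `name`-based check, the second example below would have been wrongly flagged as a tool call.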
Lastly, this adds some new verification test configs so we can see the
results of using OpenAI APIs against SaaS services directly compared
to hitting Llama Stack with a remote provider pointing at that SaaS
service.
You can run these verification tests like:
```
llama stack run \
--image-type venv \
tests/verifications/openai-api-verification-run.yaml
python tests/verifications/generate_report.py \
--run-tests \
--provider together fireworks openai \
together-llama-stack \
fireworks-llama-stack \
openai-llama-stack
```
Signed-off-by: Ben Browning <bbrownin@redhat.com>
# Llama Stack Verifications
Llama Stack Verifications provide standardized test suites to ensure API compatibility and behavior consistency across different LLM providers. These tests help verify that different models and providers implement the expected interfaces and behaviors correctly.
## Overview
This framework allows you to run the same set of verification tests against different LLM providers' OpenAI-compatible endpoints (Fireworks, Together, Groq, Cerebras, etc., and OpenAI itself) to ensure they meet the expected behavior and interface standards.
## Features
The verification suite currently tests:
- Basic chat completions (streaming and non-streaming)
- Image input capabilities
- Structured JSON output formatting
- Tool calling functionality
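As an illustration of the tool-calling category, a verification test exercises an OpenAI-style chat completion request carrying a tool definition. The sketch below only builds the request payload (the `get_weather` tool and the helper function are hypothetical; field names follow the OpenAI chat completions API):

```python
def build_tool_call_request(model: str) -> dict:
    """Hypothetical helper: construct an OpenAI-style chat completion
    request with one tool defined, the kind of payload the tool-calling
    verification tests send to each provider's endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": "What's the weather in Paris?"}
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Get the current weather for a city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }
```

A conforming provider is expected to respond with a tool call naming `get_weather` rather than a plain-text answer.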
## Running Tests

To run the verification tests, use pytest with the following parameters:

```
cd llama-stack
pytest tests/verifications/openai --provider=<provider-name>
```

Example:

```
# Run all tests
pytest tests/verifications/openai --provider=together

# Only run tests with Llama 4 models
pytest tests/verifications/openai --provider=together -k 'Llama-4'
```
## Parameters

- `--provider`: The provider name (openai, fireworks, together, groq, cerebras, etc.)
- `--base-url`: The base URL for the provider's API (optional; defaults to the standard URL for the specified provider)
- `--api-key`: Your API key for the provider (optional; defaults to the standard API_KEY name for the specified provider)
## Supported Providers
The verification suite currently supports:
- OpenAI
- Fireworks
- Together
- Groq
- Cerebras
## Adding New Test Cases

To add new test cases, create appropriate JSON files in the `openai/fixtures/test_cases/` directory following the existing patterns.
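A new test case file might look something like the following, shown here as the equivalent Python dict so the shape is easy to see. This is a guess at the structure; the actual schema may differ, so check the existing files under `openai/fixtures/test_cases/` for the real shape:

```python
import json

# Hypothetical test case definition: a simple non-streaming chat
# completion with an expected substring in the response.
example_case = {
    "test_chat_basic": {
        "test_name": "test_chat_basic",
        "test_params": {
            "input_output": [
                {
                    "input": {
                        "messages": [
                            {
                                "role": "user",
                                "content": "Which planet do humans live on?",
                            }
                        ]
                    },
                    "output": "Earth",
                }
            ]
        },
    }
}

# A new case would be saved as a .json file in openai/fixtures/test_cases/:
print(json.dumps(example_case, indent=2))
```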
## Structure

- `__init__.py` - Marks the directory as a Python package
- `conftest.py` - Global pytest configuration and fixtures
- `openai/` - Tests specific to OpenAI-compatible APIs
  - `fixtures/` - Test fixtures and utilities
    - `fixtures.py` - Provider-specific fixtures
    - `load.py` - Utilities for loading test cases
    - `test_cases/` - JSON test case definitions
  - `test_chat_completion.py` - Tests for chat completion APIs