# What does this PR do?

This is a combination of what was previously 3 separate PRs - #2069, #2075, and #2083. It turns out all 3 of those are needed to land a working function calling Responses implementation. The web search builtin tool was already working, but this wires in support for custom function calling.

I ended up combining all three into one PR because they all had lots of merge conflicts, both with each other and with #1806 that just landed, and because landing any of them individually would have left only a partially working implementation merged.

The new things added here are:

* Storing of input items from previous responses, and restoring of those input items when adding previous responses to the conversation state
* Handling of multiple input item message roles, not just "user" messages
* Support for custom tools passed into the Responses API to enable function calling beyond just the builtin web search tool (see the client-side sketch below)

Closes #2074
Closes #2080

## Test Plan

### Unit Tests

Several new unit tests were added, and they all pass. Ran via:

```
python -m pytest -s -v tests/unit/providers/agents/meta_reference/test_openai_responses.py
```

### Responses API Verification Tests

I ran our verification run.yaml against multiple providers to ensure we were getting a decent pass rate. Specifically, I ensured the new custom tool verification test passed across multiple providers, and that the multi-turn examples passed across at least some of the providers (some providers still struggle with multi-turn workflows).

Running the stack setup for verification testing:

```
llama stack run --image-type venv tests/verifications/openai-api-verification-run.yaml
```

Together, passing 100% as an example:

```
pytest -s -v 'tests/verifications/openai_api/test_responses.py' --provider=together-llama-stack
```

## Documentation

We will need to start documenting the OpenAI APIs, but the Responses implementation is still evolving rapidly, so that work is being deferred for now.

---------

Signed-off-by: Derek Higgins <derekh@redhat.com>
Signed-off-by: Ben Browning <bbrownin@redhat.com>
Co-authored-by: Derek Higgins <derekh@redhat.com>
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
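To give a sense of what the newly supported custom function calling looks like from the client side, here is a minimal sketch against the OpenAI-compatible Responses API. The base URL, placeholder API key, model name, and `get_weather` tool are illustrative assumptions rather than values taken from this PR; adjust them to match your Llama Stack deployment.

```
# Minimal sketch (assumptions noted above): exercise custom function calling
# through the OpenAI-compatible Responses API of a running Llama Stack server.
from openai import OpenAI

# The base_url path and api_key placeholder are assumptions for illustration.
client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")

response = client.responses.create(
    model="meta-llama/Llama-3.3-70B-Instruct",  # placeholder model id
    input="What is the weather like in Paris today?",
    tools=[
        {
            # A custom function tool, as opposed to the builtin web search tool.
            "type": "function",
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
)

# With function calling wired in, the output items should include a function
# call that the application can execute and feed back in a follow-up request.
for item in response.output:
    print(item.type, getattr(item, "name", ""))
```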
# Llama Stack Verifications
Llama Stack Verifications provide standardized test suites to ensure API compatibility and behavior consistency across different LLM providers. These tests help verify that different models and providers implement the expected interfaces and behaviors correctly.
## Overview
This framework allows you to run the same set of verification tests against different LLM providers' OpenAI-compatible endpoints (Fireworks, Together, Groq, Cerebras, etc., and OpenAI itself) to ensure they meet the expected behavior and interface standards.
## Features
The verification suite currently tests the following in both streaming and non-streaming modes (a short illustrative sketch follows the list):
- Basic chat completions
- Image input capabilities
- Structured JSON output formatting
- Tool calling functionality
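
As a rough illustration of what these checks exercise (this is not the framework's actual test code), the sketch below issues a basic chat completion in both non-streaming and streaming modes against an OpenAI-compatible endpoint. The base URL, environment variable, and model id are assumptions for the example; the real tests pull these from the provider configuration and fixtures.

```
# Illustrative sketch only; the real tests live under tests/verifications/openai_api/.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",        # assumed provider endpoint
    api_key=os.environ["TOGETHER_API_KEY"],        # assumed key variable
)
model = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # placeholder model id

# Non-streaming: a single completed response object comes back.
resp = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(resp.choices[0].message.content)

# Streaming: the same request, consumed as incremental chunks.
stream = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Say hello in one word."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
print()
```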
## Report
The latest report can be found at [REPORT.md](REPORT.md).
To update the report, ensure you have the API keys set:

```
export OPENAI_API_KEY=<your_openai_api_key>
export FIREWORKS_API_KEY=<your_fireworks_api_key>
export TOGETHER_API_KEY=<your_together_api_key>
```

then run

```
uv run --with-editable ".[dev]" python tests/verifications/generate_report.py --run-tests
```
## Running Tests
To run the verification tests, use pytest with the following parameters:

```
cd llama-stack
pytest tests/verifications/openai_api --provider=<provider-name>
```

Example:

```
# Run all tests
pytest tests/verifications/openai_api --provider=together

# Only run tests with Llama 4 models
pytest tests/verifications/openai_api --provider=together -k 'Llama-4'
```
## Parameters

- `--provider`: The provider name (openai, fireworks, together, groq, cerebras, etc.)
- `--base-url`: The base URL for the provider's API (optional; defaults to the standard URL for the specified provider)
- `--api-key`: Your API key for the provider (optional; defaults to the standard API_KEY name for the specified provider)
## Supported Providers
The verification suite supports any provider with an OpenAI-compatible endpoint. See `tests/verifications/conf/` for the list of supported providers.

To run against a new provider, simply add a new YAML file with the provider config to the `conf/` directory. See `tests/verifications/conf/together.yaml` for an example.
## Adding New Test Cases

To add new test cases, create appropriate JSON files in the `openai_api/fixtures/test_cases/` directory, following the existing patterns.
## Structure

- `__init__.py` - Marks the directory as a Python package
- `conf/` - Provider-specific configuration files
- `openai_api/` - Tests specific to OpenAI-compatible APIs
  - `fixtures/` - Test fixtures and utilities
    - `fixtures.py` - Provider-specific fixtures
    - `load.py` - Utilities for loading test cases
    - `test_cases/` - JSON test case definitions
  - `test_chat_completion.py` - Tests for chat completion APIs