llama-stack-mirror/tests/verifications
Latest commit 8a1c0a1008 by Ben Browning: Improve groq OpenAI API compatibility
This doesn't get Groq to 100% on the OpenAI API verification tests, but it
does get it to 88.2% when Llama Stack is in the middle, compared to 61.8%
when using an OpenAI client against Groq directly.

The groq provider doesn't use litellm under the covers in its
openai_chat_completion endpoint; instead, it uses an AsyncOpenAI client
directly, with some special handling to improve the conformance of responses
for response_format usage and tool calling.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-13 13:41:52 -04:00
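
As a rough sketch of the approach described in the commit message above (not the groq provider's actual implementation), the snippet below points an AsyncOpenAI client at Groq's OpenAI-compatible endpoint and exercises response_format and tool calling; the base URL, model name, and API key handling are assumptions for illustration.

# Hedged sketch of the "direct AsyncOpenAI client" approach described above.
# Not the groq provider's actual code: the base URL, model name, and API key
# handling are assumptions for illustration.
import asyncio
import os

from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible endpoint (assumed)
    api_key=os.environ.get("GROQ_API_KEY"),
)

async def main() -> None:
    # Structured JSON output via response_format
    json_resp = await client.chat.completions.create(
        model="llama-3.3-70b-versatile",  # example Groq model id; may differ
        messages=[{"role": "user", "content": "Return a JSON object with a 'greeting' key."}],
        response_format={"type": "json_object"},
    )
    print(json_resp.choices[0].message.content)

    # Tool calling
    tool_resp = await client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[{"role": "user", "content": "What is the weather in Paris?"}],
        tools=[{
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    )
    print(tool_resp.choices[0].message.tool_calls)

asyncio.run(main())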
Directory contents (name, last commit, date):

  • conf - Improve groq OpenAI API compatibility (2025-04-13 13:41:52 -04:00)
  • openai_api - Get fireworks provider to 100% on OpenAI API verification (2025-04-13 13:39:56 -04:00)
  • test_results - test(verification): overwrite test result instead of creating new ones (#1934) (2025-04-10 16:59:28 -07:00)
  • __init__.py - feat: adds test suite to verify provider's OAI compat endpoints (#1901) (2025-04-08 21:21:38 -07:00)
  • conftest.py - feat(verification): various improvements (#1921) (2025-04-10 10:26:19 -07:00)
  • generate_report.py - Improve groq OpenAI API compatibility (2025-04-13 13:41:52 -04:00)
  • openai-api-verification-run.yaml - Improve groq OpenAI API compatibility (2025-04-13 13:41:52 -04:00)
  • README.md - feat: adds test suite to verify provider's OAI compat endpoints (#1901) (2025-04-08 21:21:38 -07:00)
  • REPORT.md - test(verification): overwrite test result instead of creating new ones (#1934) (2025-04-10 16:59:28 -07:00)

Llama Stack Verifications

Llama Stack Verifications provide standardized test suites to ensure API compatibility and behavior consistency across different LLM providers. These tests help verify that different models and providers implement the expected interfaces and behaviors correctly.

Overview

This framework allows you to run the same set of verification tests against different LLM providers' OpenAI-compatible endpoints (Fireworks, Together, Groq, Cerebras, etc., and OpenAI itself) to ensure they meet the expected behavior and interface standards.

Features

The verification suite currently tests the following; a rough sketch of these checks appears after the list:

  • Basic chat completions (streaming and non-streaming)
  • Image input capabilities
  • Structured JSON output formatting
  • Tool calling functionality
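
As a rough illustration only (not an actual test from this suite), the checks listed above correspond to calls like the following against a provider's OpenAI-compatible endpoint; the client setup, model name, and image URL are placeholders, and the real tests are driven by fixtures and JSON test case files rather than hard-coded values.

# Illustrative sketch of the kinds of checks listed above, written against the
# standard OpenAI Python client. The client setup, model name, and image URL
# are placeholders; this is not an actual test from the suite.
from openai import OpenAI

client = OpenAI()  # placeholder; the suite builds a provider-specific client
MODEL = "some-provider-model-id"  # placeholder

# Non-streaming chat completion
resp = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
assert resp.choices[0].message.content

# Streaming chat completion
stream = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Say hello in one word."}],
    stream=True,
)
streamed = "".join(c.choices[0].delta.content or "" for c in stream if c.choices)
assert streamed

# Image input
image_resp = client.chat.completions.create(
    model=MODEL,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/image.png"}},
        ],
    }],
)
assert image_resp.choices[0].message.content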

Running Tests

To run the verification tests, use pytest with the following parameters:

cd llama-stack
pytest tests/verifications/openai --provider=<provider-name>

Example:

# Run all tests
pytest tests/verifications/openai --provider=together

# Only run tests with Llama 4 models
pytest tests/verifications/openai --provider=together -k 'Llama-4'

Parameters

  • --provider: The provider name (openai, fireworks, together, groq, cerebras, etc.)
  • --base-url: The base URL for the provider's API (optional - defaults to the standard URL for the specified provider)
  • --api-key: Your API key for the provider (optional - defaults to the provider's standard API key environment variable)
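
How these options are wired up lives in conftest.py and the fixtures. The exact implementation in this repo may differ, but a minimal sketch of the usual pytest pattern looks like the following; the DEFAULT_BASE_URLS mapping and the openai_client fixture name are illustrative assumptions.

# Minimal sketch of how --provider, --base-url, and --api-key could be
# registered and consumed in a pytest conftest.py. Illustrative only; the
# actual conftest.py and fixtures in this repo may differ.
import os

import pytest
from openai import OpenAI

def pytest_addoption(parser):
    parser.addoption("--provider", action="store", default=None,
                     help="Provider name (openai, fireworks, together, groq, cerebras, ...)")
    parser.addoption("--base-url", action="store", default=None,
                     help="Base URL for the provider's OpenAI-compatible API")
    parser.addoption("--api-key", action="store", default=None,
                     help="API key for the provider")

# Hypothetical defaults keyed by provider name (values are illustrative)
DEFAULT_BASE_URLS = {
    "openai": "https://api.openai.com/v1",
    "groq": "https://api.groq.com/openai/v1",
}

@pytest.fixture
def openai_client(request):
    provider = request.config.getoption("--provider")
    base_url = request.config.getoption("--base-url") or DEFAULT_BASE_URLS.get(provider)
    api_key = request.config.getoption("--api-key") or os.environ.get(f"{provider.upper()}_API_KEY")
    return OpenAI(base_url=base_url, api_key=api_key)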

Supported Providers

The verification suite currently supports:

  • OpenAI
  • Fireworks
  • Together
  • Groq
  • Cerebras

Adding New Test Cases

To add new test cases, create appropriate JSON files in the openai/fixtures/test_cases/ directory following the existing patterns.

Structure

  • __init__.py - Marks the directory as a Python package
  • conftest.py - Global pytest configuration and fixtures
  • openai/ - Tests specific to OpenAI-compatible APIs
    • fixtures/ - Test fixtures and utilities
      • fixtures.py - Provider-specific fixtures
      • load.py - Utilities for loading test cases
      • test_cases/ - JSON test case definitions
    • test_chat_completion.py - Tests for chat completion APIs