Llama Stack Verifications

Llama Stack Verifications provide standardized test suites to ensure API compatibility and behavior consistency across different LLM providers. These tests help verify that different models and providers implement the expected interfaces and behaviors correctly.

Overview

This framework allows you to run the same set of verification tests against different LLM providers' OpenAI-compatible endpoints (Fireworks, Together, Groq, Cerebras, etc., and OpenAI itself) to ensure they meet the expected behavior and interface standards.

Features

The verification suite currently tests:

  • Basic chat completions (streaming and non-streaming)
  • Image input capabilities
  • Structured JSON output formatting
  • Tool calling functionality

Running Tests

To run the verification tests, use pytest with the following parameters:

cd llama-stack
pytest tests/verifications/openai --provider=<provider-name>

Example:

# Run all tests
pytest tests/verifications/openai --provider=together

# Only run tests with Llama 4 models
pytest tests/verifications/openai --provider=together -k 'Llama-4'
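
The same -k keyword filter can also narrow a run to a single feature area. The keyword below assumes the streaming tests carry "streaming" in their names, which may not match this suite's actual naming:

# Only run streaming-related tests (keyword is an assumption about test naming)
pytest tests/verifications/openai --provider=together -k 'streaming'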

Parameters

  • --provider: The provider name (openai, fireworks, together, groq, cerebras, etc.)
  • --base-url: The base URL for the provider's API (optional; defaults to the standard URL for the specified provider)
  • --api-key: Your API key for the provider (optional; defaults to the standard API key environment variable for the specified provider)
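
Putting the three flags together, a run against a custom OpenAI-compatible endpoint might look like this; the URL and key below are placeholders, not real values:

# Point the suite at a custom OpenAI-compatible endpoint
# (base URL and API key are placeholders)
pytest tests/verifications/openai \
  --provider=openai \
  --base-url=https://example.com/v1 \
  --api-key="$MY_API_KEY"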

Supported Providers

The verification suite currently supports:

  • OpenAI
  • Fireworks
  • Together
  • Groq
  • Cerebras
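
Because the same command works for every provider, a small shell loop can exercise all of them in one pass. This sketch assumes an API key is already configured for each provider:

# Run the full suite against each supported provider in turn
for provider in openai fireworks together groq cerebras; do
  pytest tests/verifications/openai --provider=$provider
done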

Adding New Test Cases

To add new test cases, create appropriate JSON files in the openai/fixtures/test_cases/ directory following the existing patterns.
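
As a purely hypothetical sketch of the workflow, the snippet below drops a new case file into that directory; the JSON keys shown are illustrative only, so copy the schema from an existing file rather than from this example:

# Hypothetical example only -- mirror an existing file's schema instead
cat > tests/verifications/openai/fixtures/test_cases/my_case.json <<'EOF'
{
  "case_id": "basic_greeting",
  "input": {"messages": [{"role": "user", "content": "Hello"}]},
  "expected": {"type": "contains", "value": "hello"}
}
EOF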

Structure

  • __init__.py - Marks the directory as a Python package
  • conftest.py - Global pytest configuration and fixtures
  • openai/ - Tests specific to OpenAI-compatible APIs
    • fixtures/ - Test fixtures and utilities
      • fixtures.py - Provider-specific fixtures
      • load.py - Utilities for loading test cases
      • test_cases/ - JSON test case definitions
    • test_chat_completion.py - Tests for chat completion APIs