Llama Stack Verifications

Llama Stack Verifications provide standardized test suites to ensure API compatibility and behavior consistency across different LLM providers. These tests help verify that different models and providers implement the expected interfaces and behaviors correctly.

Overview

This framework allows you to run the same set of verification tests against different LLM providers' OpenAI-compatible endpoints (Fireworks, Together, Groq, Cerebras, etc., and OpenAI itself) to ensure they meet the expected behavior and interface standards.

Features

The verification suite currently tests the following in both streaming and non-streaming modes:

  • Basic chat completions
  • Image input capabilities
  • Structured JSON output formatting
  • Tool calling functionality
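
For illustration, a stripped-down version of one such check might look like the following. This is a sketch that drives an OpenAI-compatible endpoint with the openai Python client directly; the base URL, API key variable, and model id are placeholders, not the suite's actual fixtures.

# Minimal sketch: exercise chat completions in both streaming and non-streaming modes.
# Placeholders: VERIFY_BASE_URL, VERIFY_API_KEY, and the model id are illustrative only.
import os
import pytest
from openai import OpenAI

BASE_URL = os.environ.get("VERIFY_BASE_URL", "https://api.together.xyz/v1")
API_KEY = os.environ.get("VERIFY_API_KEY", "")
MODEL = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # placeholder model id

@pytest.mark.parametrize("stream", [False, True])
def test_basic_chat_completion(stream):
    client = OpenAI(base_url=BASE_URL, api_key=API_KEY)
    messages = [{"role": "user", "content": "Reply with exactly one word: hello"}]
    if stream:
        chunks = client.chat.completions.create(model=MODEL, messages=messages, stream=True)
        text = "".join(c.choices[0].delta.content or "" for c in chunks if c.choices)
    else:
        resp = client.chat.completions.create(model=MODEL, messages=messages)
        text = resp.choices[0].message.content
    assert text and text.strip()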

Report

The latest report can be found at REPORT.md.

To update the report, ensure you have the API keys set,

export OPENAI_API_KEY=<your_openai_api_key>
export FIREWORKS_API_KEY=<your_fireworks_api_key>
export TOGETHER_API_KEY=<your_together_api_key>

then run

uv run python tests/verifications/generate_report.py --run-tests

Running Tests

To run the verification tests, use pytest with the following parameters:

cd llama-stack
pytest tests/verifications/openai_api --provider=<provider-name>

Example:

# Run all tests
pytest tests/verifications/openai_api --provider=together

# Only run tests with Llama 4 models
pytest tests/verifications/openai_api --provider=together -k 'Llama-4'

Parameters

  • --provider: The provider name (openai, fireworks, together, groq, cerebras, etc.)
  • --base-url: The base URL for the provider's API (optional - defaults to the standard URL for the specified provider)
  • --api-key: Your API key for the provider (optional - defaults to the standard API key environment variable for the specified provider)
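
These options are wired up in conftest.py. As a rough illustration of how such pytest options are typically registered (a simplified sketch, not necessarily how this repository's conftest.py is written; the fixture name provider_options is illustrative):

# Simplified sketch of registering the options above in a conftest.py.
import pytest

def pytest_addoption(parser):
    parser.addoption("--provider", default=None, help="Provider name, e.g. openai, fireworks, together")
    parser.addoption("--base-url", default=None, help="Override the provider's default base URL")
    parser.addoption("--api-key", default=None, help="Override the provider's default API key")

@pytest.fixture
def provider_options(request):
    # Tests and fixtures can build an OpenAI-compatible client from these values.
    return {
        "provider": request.config.getoption("--provider"),
        "base_url": request.config.getoption("--base-url"),
        "api_key": request.config.getoption("--api-key"),
    }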

Supported Providers

The verification suite supports any provider with an OpenAI-compatible endpoint.

See tests/verifications/conf/ for the list of supported providers.

To run against a new provider, add a new YAML file with the provider's configuration to the conf/ directory. See tests/verifications/conf/together.yaml for an example.
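
As a rough sketch of what the suite can do with such a file (the keys base_url and api_key_var below are assumptions for this illustration; check an existing conf/*.yaml for the actual schema):

# Illustrative only: build an OpenAI-compatible client from a provider config file.
# Requires PyYAML; the base_url and api_key_var keys are assumed for this example.
import os
import yaml
from openai import OpenAI

def client_from_conf(path: str) -> OpenAI:
    with open(path) as f:
        conf = yaml.safe_load(f)
    return OpenAI(
        base_url=conf["base_url"],                # provider's OpenAI-compatible endpoint
        api_key=os.environ[conf["api_key_var"]],  # env var that holds the provider API key
    )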

Adding New Test Cases

To add new test cases, create appropriate JSON files in the openai_api/fixtures/test_cases/ directory following the existing patterns.
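
For a rough idea of how such JSON files end up driving tests (the file name and the id/input/expected fields below are assumptions for illustration; see fixtures/load.py and the existing files in test_cases/ for the real patterns):

# Illustrative only: load JSON test cases and parametrize a test over them.
import json
import pathlib
import pytest

CASES_DIR = pathlib.Path("tests/verifications/openai_api/fixtures/test_cases")

def load_cases(name: str):
    with open(CASES_DIR / name) as f:
        return json.load(f)  # e.g. a list of {"id": ..., "input": ..., "expected": ...}

@pytest.mark.parametrize("case", load_cases("chat_basic.json"), ids=lambda c: c["id"])
def test_from_case(case):
    assert "input" in case and "expected" in case  # real tests call the API with case["input"]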

Structure

  • __init__.py - Marks the directory as a Python package
  • conf/ - Provider-specific configuration files
  • openai_api/ - Tests specific to OpenAI-compatible APIs
    • fixtures/ - Test fixtures and utilities
      • fixtures.py - Provider-specific fixtures
      • load.py - Utilities for loading test cases
      • test_cases/ - JSON test case definitions
    • test_chat_completion.py - Tests for chat completion APIs