Integration Testing Guide

Integration tests verify complete workflows across different providers using Llama Stack's record-replay system.

Quick Start

# Run all integration tests with existing recordings
uv run pytest tests/integration/

# Test against live APIs with auto-server
export FIREWORKS_API_KEY=your_key
pytest tests/integration/inference/ \
    --stack-config=server:fireworks \
    --text-model=meta-llama/Llama-3.1-8B-Instruct

Configuration Options

You can see all options with:

cd tests/integration

# this will show a long list of options, look for "Custom options:"
pytest --help

Here are the most important options:

  • --stack-config: specify the stack config to use. There are five ways to point to a stack:
    • server:<config> - automatically start a server with the given config (e.g., server:fireworks). This provides one-step testing by auto-starting the server if the port is available, or reusing an existing server if already running.
    • server:<config>:<port> - same as above but with a custom port (e.g., server:together:8322)
    • a URL which points to a Llama Stack distribution server
    • a template (e.g., starter) or a path to a run.yaml file
    • a comma-separated list of api=provider pairs, e.g. inference=fireworks,safety=llama-guard,agents=meta-reference. This is most useful for testing a single API surface.
  • --env: set environment variables, e.g. --env KEY=value. This is a utility option for setting environment variables required by various providers.
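
For illustration, the way these forms can be told apart might be sketched as follows (this is not the actual CLI parsing code; the default port 8321 and the dictionary shape are assumptions):

```python
# Illustrative sketch of distinguishing the --stack-config forms.
def parse_stack_config(value: str) -> dict:
    if value.startswith("server:"):
        # server:<config> or server:<config>:<port>
        parts = value.split(":")
        config = parts[1]
        port = int(parts[2]) if len(parts) > 2 else 8321  # assumed default port
        return {"mode": "server", "config": config, "port": port}
    if value.startswith("http://") or value.startswith("https://"):
        # a URL pointing at a running distribution server
        return {"mode": "url", "url": value}
    if "=" in value:
        # comma-separated api=provider pairs
        pairs = dict(pair.split("=", 1) for pair in value.split(","))
        return {"mode": "adhoc", "providers": pairs}
    # otherwise a template name (e.g. starter) or a path to a run.yaml
    return {"mode": "template", "config": value}
```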

Model parameters can be influenced by the following options:

  • --text-model: comma-separated list of text models.
  • --vision-model: comma-separated list of vision models.
  • --embedding-model: comma-separated list of embedding models.
  • --safety-shield: comma-separated list of safety shields.
  • --judge-model: comma-separated list of judge models.
  • --embedding-dimension: output dimensionality of the embedding model to use for testing. Default: 384

Each of these options accepts a comma-separated list and can be used to generate multiple parameter combinations. Note that tests are skipped if no relevant model is specified.
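
The cross-product expansion can be pictured roughly like this (a simplified sketch, not the real conftest logic):

```python
from itertools import product

# Roughly how comma-separated model options expand into test parametrizations:
# every text model is paired with every embedding model.
text_models = "meta-llama/Llama-3.1-8B-Instruct,meta-llama/Llama-3.1-70B-Instruct".split(",")
embedding_models = "all-MiniLM-L6-v2".split(",")

test_cases = list(product(text_models, embedding_models))
# two text models x one embedding model -> 2 parameter combinations
```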

Examples

Testing against a Server

Run all text inference tests by auto-starting a server with the fireworks config:

pytest -s -v tests/integration/inference/test_text_inference.py \
   --stack-config=server:fireworks \
   --text-model=meta-llama/Llama-3.1-8B-Instruct

Run tests with auto-server startup on a custom port:

pytest -s -v tests/integration/inference/ \
   --stack-config=server:together:8322 \
   --text-model=meta-llama/Llama-3.1-8B-Instruct

Run multiple test suites with auto-server (eliminates manual server management):

# Auto-start server and run all integration tests
export FIREWORKS_API_KEY=<your_key>

pytest -s -v tests/integration/inference/ tests/integration/safety/ tests/integration/agents/ \
   --stack-config=server:fireworks \
   --text-model=meta-llama/Llama-3.1-8B-Instruct

Testing with Library Client

Run all text inference tests with the starter distribution using the together provider:

ENABLE_TOGETHER=together pytest -s -v tests/integration/inference/test_text_inference.py \
   --stack-config=starter \
   --text-model=meta-llama/Llama-3.1-8B-Instruct

Running all inference tests for a number of models using the together provider:

TEXT_MODELS=meta-llama/Llama-3.1-8B-Instruct,meta-llama/Llama-3.1-70B-Instruct
VISION_MODELS=meta-llama/Llama-3.2-11B-Vision-Instruct
EMBEDDING_MODELS=all-MiniLM-L6-v2
export ENABLE_TOGETHER=together
export TOGETHER_API_KEY=<together_api_key>

pytest -s -v tests/integration/inference/ \
   --stack-config=starter \
   --text-model=$TEXT_MODELS \
   --vision-model=$VISION_MODELS \
   --embedding-model=$EMBEDDING_MODELS

Same thing, but instead of using the distribution, use an ad-hoc stack with just one provider (fireworks for inference):

export FIREWORKS_API_KEY=<fireworks_api_key>

pytest -s -v tests/integration/inference/ \
   --stack-config=inference=fireworks \
   --text-model=$TEXT_MODELS \
   --vision-model=$VISION_MODELS \
   --embedding-model=$EMBEDDING_MODELS

Running Vector IO tests for a number of embedding models:

EMBEDDING_MODELS=all-MiniLM-L6-v2

pytest -s -v tests/integration/vector_io/ \
   --stack-config=inference=sentence-transformers,vector_io=sqlite-vec \
   --embedding-model=$EMBEDDING_MODELS

Recording Modes

The testing system supports three modes controlled by environment variables:

LIVE Mode (Default)

Tests make real API calls:

LLAMA_STACK_TEST_INFERENCE_MODE=live pytest tests/integration/

RECORD Mode

Captures API interactions for later replay:

LLAMA_STACK_TEST_INFERENCE_MODE=record \
LLAMA_STACK_TEST_RECORDING_DIR=./recordings \
pytest tests/integration/inference/test_new_feature.py

REPLAY Mode

Uses cached responses instead of making API calls:

LLAMA_STACK_TEST_INFERENCE_MODE=replay \
LLAMA_STACK_TEST_RECORDING_DIR=./recordings \
pytest tests/integration/
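
Conceptually, record-replay keys each request by a deterministic hash and serves the stored response on replay. A minimal sketch of the idea (illustrative only; the actual implementation and storage format differ):

```python
import hashlib
import json

# Requests are normalized (sorted keys) and hashed so the same call
# maps to the same recording regardless of dict ordering.
def request_key(endpoint: str, body: dict) -> str:
    canonical = json.dumps({"endpoint": endpoint, "body": body}, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

recordings: dict[str, dict] = {}

def record(endpoint: str, body: dict, response: dict) -> None:
    recordings[request_key(endpoint, body)] = response

def replay(endpoint: str, body: dict) -> dict:
    key = request_key(endpoint, body)
    if key not in recordings:
        raise KeyError(f"No recording for {endpoint}; re-run in record mode")
    return recordings[key]
```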

Managing Recordings

Viewing Recordings

# See what's recorded
sqlite3 recordings/index.sqlite "SELECT endpoint, model, timestamp FROM recordings;"

# Inspect specific response
cat recordings/responses/abc123.json | jq '.'
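
The index can also be queried programmatically. A hypothetical helper, assuming the table and columns shown in the query above:

```python
import sqlite3

# Hypothetical helper for listing recordings from the index database.
# The schema (table "recordings" with endpoint/model/timestamp columns)
# mirrors the sqlite3 query above but is assumed, not verified.
def list_recordings(db_path: str) -> list[tuple]:
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            "SELECT endpoint, model, timestamp FROM recordings"
        ).fetchall()
    finally:
        conn.close()
```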

Re-recording Tests

# Re-record after modifying a test. Note: deleting recordings/ wipes ALL
# cached responses; remove only the relevant entries to keep the rest.
rm -rf recordings/
LLAMA_STACK_TEST_INFERENCE_MODE=record \
LLAMA_STACK_TEST_RECORDING_DIR=./recordings \
pytest tests/integration/test_modified.py

Writing Tests

Basic Test Pattern

def test_basic_completion(llama_stack_client, text_model_id):
    response = llama_stack_client.inference.completion(
        model_id=text_model_id,
        content="Hello",  # plain string prompt; completion() does not take chat messages
    )

    # Test structure, not AI output quality
    assert response.completion_message is not None
    assert isinstance(response.completion_message.content, str)
    assert len(response.completion_message.content) > 0
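
The structure-not-content assertion style can be exercised in isolation by stubbing the client; a minimal sketch with illustrative names (the real tests use the llama_stack_client fixture):

```python
from types import SimpleNamespace

# Minimal stub standing in for the llama_stack_client fixture, to show
# that the assertions check response shape rather than AI output quality.
class StubInference:
    def completion(self, model_id, content):
        return SimpleNamespace(
            completion_message=SimpleNamespace(content=f"stub reply from {model_id}")
        )

stub_client = SimpleNamespace(inference=StubInference())

response = stub_client.inference.completion(model_id="test-model", content="Hello")
assert response.completion_message is not None
assert isinstance(response.completion_message.content, str)
assert len(response.completion_message.content) > 0
```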

Provider-Specific Tests

def test_asymmetric_embeddings(llama_stack_client, embedding_model_id):
    if embedding_model_id not in MODELS_SUPPORTING_TASK_TYPE:
        pytest.skip(f"Model {embedding_model_id} doesn't support task types")

    query_response = llama_stack_client.inference.embeddings(
        model_id=embedding_model_id,
        contents=["What is machine learning?"],
        task_type="query"
    )

    assert query_response.embeddings is not None

Best Practices

  • Test API contracts, not AI output quality - Focus on response structure, not content
  • Use existing recordings for development - Fast iteration without API costs
  • Record new interactions only when needed - Adding new functionality
  • Test across providers - Ensure compatibility
  • Commit recordings to version control - Deterministic CI builds