# Integration Testing Guide
Integration tests verify complete workflows across different providers using Llama Stack's record-replay system.
## Quick Start

```bash
# Run all integration tests with existing recordings
uv run pytest tests/integration/

# Test against live APIs with auto-server
export FIREWORKS_API_KEY=your_key
pytest tests/integration/inference/ \
  --stack-config=server:fireworks \
  --text-model=meta-llama/Llama-3.1-8B-Instruct
```
## Configuration Options

You can see all options with:

```bash
cd tests/integration

# this will show a long list of options, look for "Custom options:"
pytest --help
```
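The `--help` output is long; a standard shell filter can jump straight to the section mentioned above (the 40 lines of trailing context is just a convenient guess):

```bash
# Show only the "Custom options:" section and the lines that follow it
pytest --help | grep -A 40 "Custom options:"
```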
Here are the most important options:
- `--stack-config`: specify the stack config to use. You can point to a stack in one of several ways:
  - `server:<config>` - automatically start a server with the given config (e.g., `server:fireworks`). This provides one-step testing by auto-starting the server if the port is available, or reusing an existing server if already running.
  - `server:<config>:<port>` - same as above but with a custom port (e.g., `server:together:8322`)
  - a URL which points to a Llama Stack distribution server
  - a template (e.g., `starter`) or a path to a `run.yaml` file
  - a comma-separated list of api=provider pairs, e.g. `inference=fireworks,safety=llama-guard,agents=meta-reference`. This is most useful for testing a single API surface.
- `--env`: set environment variables, e.g. `--env KEY=value`. This is a utility option to set environment variables required by various providers (see the example below).
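For instance, a provider credential can be passed through the test harness rather than exported in the shell. A minimal sketch, assuming the Fireworks provider reads `FIREWORKS_API_KEY` as in the Quick Start above:

```bash
# Pass the provider API key via --env instead of exporting it beforehand
pytest -s -v tests/integration/inference/ \
  --stack-config=server:fireworks \
  --env FIREWORKS_API_KEY=<your_key> \
  --text-model=meta-llama/Llama-3.1-8B-Instruct
```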
Model parameters can be influenced by the following options:
- `--text-model`: comma-separated list of text models.
- `--vision-model`: comma-separated list of vision models.
- `--embedding-model`: comma-separated list of embedding models.
- `--safety-shield`: comma-separated list of safety shields.
- `--judge-model`: comma-separated list of judge models.
- `--embedding-dimension`: output dimensionality of the embedding model to use for testing. Default: 384.

Each of the model options accepts a comma-separated list and can be used to generate multiple parameter combinations, as shown below. Note that tests will be skipped if no model is specified.
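As an illustration, passing two text models and one embedding model runs every applicable test case once per model (the model names are taken from the examples later in this guide):

```bash
# Each comma-separated value becomes its own parametrized test case
pytest -s -v tests/integration/inference/ \
  --stack-config=server:fireworks \
  --text-model=meta-llama/Llama-3.1-8B-Instruct,meta-llama/Llama-3.1-70B-Instruct \
  --embedding-model=all-MiniLM-L6-v2
```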
## Examples

### Testing against a Server

Run all text inference tests by auto-starting a server with the `fireworks` config:
```bash
pytest -s -v tests/integration/inference/test_text_inference.py \
  --stack-config=server:fireworks \
  --text-model=meta-llama/Llama-3.1-8B-Instruct
```
Run tests with auto-server startup on a custom port:
```bash
pytest -s -v tests/integration/inference/ \
  --stack-config=server:together:8322 \
  --text-model=meta-llama/Llama-3.1-8B-Instruct
```
Run multiple test suites with auto-server (eliminates manual server management):
```bash
# Auto-start server and run all integration tests
export FIREWORKS_API_KEY=<your_key>
pytest -s -v tests/integration/inference/ tests/integration/safety/ tests/integration/agents/ \
  --stack-config=server:fireworks \
  --text-model=meta-llama/Llama-3.1-8B-Instruct
```
### Testing with Library Client

Run all text inference tests with the `starter` distribution using the `together` provider:

```bash
ENABLE_TOGETHER=together pytest -s -v tests/integration/inference/test_text_inference.py \
  --stack-config=starter \
  --text-model=meta-llama/Llama-3.1-8B-Instruct
```
Running all inference tests for a number of models using the `together` provider:

```bash
TEXT_MODELS=meta-llama/Llama-3.1-8B-Instruct,meta-llama/Llama-3.1-70B-Instruct
VISION_MODELS=meta-llama/Llama-3.2-11B-Vision-Instruct
EMBEDDING_MODELS=all-MiniLM-L6-v2
export ENABLE_TOGETHER=together
export TOGETHER_API_KEY=<together_api_key>
pytest -s -v tests/integration/inference/ \
  --stack-config=together \
  --text-model=$TEXT_MODELS \
  --vision-model=$VISION_MODELS \
  --embedding-model=$EMBEDDING_MODELS
```
Same thing, but instead of using the distribution, use an ad-hoc stack with just one provider (`fireworks` for inference):
```bash
export FIREWORKS_API_KEY=<fireworks_api_key>
pytest -s -v tests/integration/inference/ \
  --stack-config=inference=fireworks \
  --text-model=$TEXT_MODELS \
  --vision-model=$VISION_MODELS \
  --embedding-model=$EMBEDDING_MODELS
```
Running Vector IO tests for a number of embedding models:
```bash
EMBEDDING_MODELS=all-MiniLM-L6-v2
pytest -s -v tests/integration/vector_io/ \
  --stack-config=inference=sentence-transformers,vector_io=sqlite-vec \
  --embedding-model=$EMBEDDING_MODELS
```
## Recording Modes
The testing system supports three modes controlled by environment variables:
### LIVE Mode (Default)

Tests make real API calls:

```bash
LLAMA_STACK_TEST_INFERENCE_MODE=live pytest tests/integration/
```
### RECORD Mode

Captures API interactions for later replay:

```bash
LLAMA_STACK_TEST_INFERENCE_MODE=record \
LLAMA_STACK_TEST_RECORDING_DIR=./recordings \
pytest tests/integration/inference/test_new_feature.py
```
### REPLAY Mode

Uses cached responses instead of making API calls:

```bash
LLAMA_STACK_TEST_INFERENCE_MODE=replay \
LLAMA_STACK_TEST_RECORDING_DIR=./recordings \
pytest tests/integration/
```
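Putting the two non-live modes together, a common workflow for a new test is to record once against a live provider and then iterate from the cache. A sketch built from the commands above (the test path is the same illustrative one used in the RECORD example):

```bash
# 1. Record interactions for the new test against a live provider (API keys required)
LLAMA_STACK_TEST_INFERENCE_MODE=record \
LLAMA_STACK_TEST_RECORDING_DIR=./recordings \
pytest tests/integration/inference/test_new_feature.py

# 2. Iterate against the cached responses; no API calls or keys needed
LLAMA_STACK_TEST_INFERENCE_MODE=replay \
LLAMA_STACK_TEST_RECORDING_DIR=./recordings \
pytest tests/integration/inference/test_new_feature.py
```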
## Managing Recordings

### Viewing Recordings

```bash
# See what's recorded
sqlite3 recordings/index.sqlite "SELECT endpoint, model, timestamp FROM recordings;"

# Inspect specific response
cat recordings/responses/abc123.json | jq '.'
```
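For a quick overview of what is cached, the same index can be aggregated; a sketch assuming the `recordings` table and columns queried above:

```bash
# Count cached interactions per endpoint and model
sqlite3 recordings/index.sqlite \
  "SELECT endpoint, model, COUNT(*) FROM recordings GROUP BY endpoint, model;"
```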
### Re-recording Tests

```bash
# Re-record specific tests
rm -rf recordings/
LLAMA_STACK_TEST_INFERENCE_MODE=record pytest tests/integration/test_modified.py
```
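Deleting the whole directory forces everything to be re-recorded on the next live run. To narrow the scope, RECORD mode can be combined with standard pytest selection; a sketch reusing the file above, where the `-k` expression is purely illustrative:

```bash
# Re-record only the tests matching a keyword expression
LLAMA_STACK_TEST_INFERENCE_MODE=record \
LLAMA_STACK_TEST_RECORDING_DIR=./recordings \
pytest tests/integration/test_modified.py -k "streaming"
```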
## Writing Tests

### Basic Test Pattern

```python
from llama_stack_client.types import CompletionMessage  # import path assumed


def test_basic_completion(llama_stack_client, text_model_id):
    response = llama_stack_client.inference.completion(
        model_id=text_model_id,
        content=CompletionMessage(role="user", content="Hello"),
    )

    # Test structure, not AI output quality
    assert response.completion_message is not None
    assert isinstance(response.completion_message.content, str)
    assert len(response.completion_message.content) > 0
```
### Provider-Specific Tests

```python
import pytest


def test_asymmetric_embeddings(llama_stack_client, embedding_model_id):
    # Assumes MODELS_SUPPORTING_TASK_TYPE is defined in the test module
    if embedding_model_id not in MODELS_SUPPORTING_TASK_TYPE:
        pytest.skip(f"Model {embedding_model_id} doesn't support task types")

    query_response = llama_stack_client.inference.embeddings(
        model_id=embedding_model_id,
        contents=["What is machine learning?"],
        task_type="query",
    )

    assert query_response.embeddings is not None
```
## Best Practices
- Test API contracts, not AI output quality - Focus on response structure, not content
- Use existing recordings for development - Fast iteration without API costs
- Record new interactions only when needed - Adding new functionality
- Test across providers - Ensure compatibility
- Commit recordings to version control - Deterministic CI builds
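The last point can be as simple as treating the recordings directory like any other test fixture; a sketch, assuming the `./recordings` layout and illustrative test path used in the earlier examples:

```bash
# Commit new or updated recordings together with the tests that use them
git add recordings/ tests/integration/inference/test_new_feature.py
git commit -m "Add integration test with recorded responses"
```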