llama-stack-mirror/tests/integration
Ashwin Bharambe eb07a0f86a
fix(ci, tests): ensure uv environments in CI are kosher, record tests (#3193)
I started this PR trying to unbreak the newly broken test
`test_agent_name`. The test had been broken all along, but the failure
did not show up because during testing we were pulling a stale,
non-updated llama stack client. See this comment:
https://github.com/llamastack/llama-stack/pull/3119#discussion_r2270988205

While fixing this, I encountered a large amount of badness in our CI
workflow definitions.

- We weren't passing `LLAMA_STACK_DIR` or `LLAMA_STACK_CLIENT_DIR`
overrides to `llama stack build` at all in some cases.
- Even when we did, we used `uv run` liberally. The first thing `uv run`
does is sync the project environment, which undoes any mutations we
might have made ourselves. But we make many mutations to these
environments in our CI runners, the most important being
`llama stack build`, which installs distro dependencies. As a result,
when you tried to run the integration tests, you would see old,
strange versions.


## Test Plan

Re-record using:

```
sh scripts/integration-tests.sh --stack-config ci-tests \
  --provider ollama --test-pattern test_agent_name --inference-mode record
```

Then re-run with `--inference-mode replay`:

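The replay command is the same as above with only the mode flipped (a sketch):

```
sh scripts/integration-tests.sh --stack-config ci-tests \
  --provider ollama --test-pattern test_agent_name --inference-mode replay
```
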
Unfortunately, this test turned out to be quite flaky for telemetry
reasons. I haven't investigated it yet and have sadly just disabled it
for now, since we have a release to push out.

Integration Testing Guide

Integration tests verify complete workflows across different providers using Llama Stack's record-replay system.

Quick Start

# Run all integration tests with existing recordings
LLAMA_STACK_TEST_INFERENCE_MODE=replay \
  LLAMA_STACK_TEST_RECORDING_DIR=tests/integration/recordings \
  uv run --group test \
  pytest -sv tests/integration/ --stack-config=starter

Configuration Options

You can see all options with:

cd tests/integration

# this will show a long list of options, look for "Custom options:"
pytest --help

Here are the most important options:

  • --stack-config: specify the stack config to use. You have several ways to point to a stack:
    • server:<config> - automatically start a server with the given config (e.g., server:starter). This provides one-step testing by auto-starting the server if the port is available, or reusing an existing server if already running.
    • server:<config>:<port> - same as above but with a custom port (e.g., server:starter:8322)
    • a URL which points to a Llama Stack distribution server
    • a distribution name (e.g., starter) or a path to a run.yaml file
    • a comma-separated list of api=provider pairs, e.g. inference=ollama,safety=llama-guard,agents=meta-reference. This is most useful for testing a single API surface.
  • --env: set environment variables, e.g. --env KEY=value. This is a utility option for setting environment variables required by various providers; see the example after this list.
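
For instance, here is a minimal sketch combining these options. It assumes a locally running Ollama server and that the starter config picks up OLLAMA_URL; adjust the variable and model to whatever your providers actually need:

pytest -s -v tests/integration/inference/ \
   --stack-config=starter \
   --env OLLAMA_URL=http://localhost:11434 \
   --text-model=ollama/llama3.2:3b-instruct-fp16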

Model parameters can be influenced by the following options:

  • --text-model: comma-separated list of text models.
  • --vision-model: comma-separated list of vision models.
  • --embedding-model: comma-separated list of embedding models.
  • --safety-shield: comma-separated list of safety shields.
  • --judge-model: comma-separated list of judge models.
  • --embedding-dimension: output dimensionality of the embedding model to use for testing. Default: 384

Each of these is a comma-separated list and can be used to generate multiple parameter combinations, as shown in the example below. Note that tests will be skipped if no model is specified.
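
For example, passing two text models runs each text-inference test once per model (the second model name here is only an illustrative assumption; use models your providers actually serve):

pytest -s -v tests/integration/inference/test_text_inference.py \
   --stack-config=starter \
   --text-model=ollama/llama3.2:3b-instruct-fp16,ollama/llama3.1:8b-instruct-fp16 \
   --embedding-model=sentence-transformers/all-MiniLM-L6-v2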

Examples

Testing against a Server

Run all text inference tests by auto-starting a server with the starter config:

OLLAMA_URL=http://localhost:11434 \
  pytest -s -v tests/integration/inference/test_text_inference.py \
   --stack-config=server:starter \
   --text-model=ollama/llama3.2:3b-instruct-fp16 \
   --embedding-model=sentence-transformers/all-MiniLM-L6-v2

Run tests with auto-server startup on a custom port:

OLLAMA_URL=http://localhost:11434 \
  pytest -s -v tests/integration/inference/ \
   --stack-config=server:starter:8322 \
   --text-model=ollama/llama3.2:3b-instruct-fp16 \
   --embedding-model=sentence-transformers/all-MiniLM-L6-v2

Testing with Library Client

The library client constructs the Stack "in-process" instead of using a server. This is useful during the iterative development process since you don't need to constantly start and stop servers.

You can do this by simply using --stack-config=starter instead of --stack-config=server:starter.
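
For example, the text inference command from above becomes (same models, just the library client instead of an auto-started server):

OLLAMA_URL=http://localhost:11434 \
  pytest -s -v tests/integration/inference/test_text_inference.py \
   --stack-config=starter \
   --text-model=ollama/llama3.2:3b-instruct-fp16 \
   --embedding-model=sentence-transformers/all-MiniLM-L6-v2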

Using ad-hoc distributions

Sometimes, you may want to make up a distribution on the fly. This is useful for testing a single provider or a single API or a small combination of providers. You can do so by specifying a comma-separated list of api=provider pairs to the --stack-config option, e.g. inference=remote::ollama,safety=inline::llama-guard,agents=inline::meta-reference.

pytest -s -v tests/integration/inference/ \
   --stack-config=inference=remote::ollama,safety=inline::llama-guard,agents=inline::meta-reference \
   --text-model=$TEXT_MODELS \
   --vision-model=$VISION_MODELS \
   --embedding-model=$EMBEDDING_MODELS

Another example: Running Vector IO tests for embedding models:

pytest -s -v tests/integration/vector_io/ \
   --stack-config=inference=inline::sentence-transformers,vector_io=inline::sqlite-vec \
   --embedding-model=sentence-transformers/all-MiniLM-L6-v2

Recording Modes

The testing system supports three modes controlled by environment variables:

LIVE Mode (Default)

Tests make real API calls:

LLAMA_STACK_TEST_INFERENCE_MODE=live pytest tests/integration/

RECORD Mode

Captures API interactions for later replay:

LLAMA_STACK_TEST_INFERENCE_MODE=record \
LLAMA_STACK_TEST_RECORDING_DIR=tests/integration/recordings \
pytest tests/integration/inference/test_new_feature.py

REPLAY Mode

Uses cached responses instead of making API calls:

LLAMA_STACK_TEST_INFERENCE_MODE=replay \
LLAMA_STACK_TEST_RECORDING_DIR=tests/integration/recordings \
pytest tests/integration/

Note that right now you must specify the recording directory. This is because different tests use different recording directories and we don't (yet) have a fool-proof way to map a test to a recording directory. We are working on this.

Managing Recordings

Viewing Recordings

# See what's recorded
sqlite3 recordings/index.sqlite "SELECT endpoint, model, timestamp FROM recordings;"

# Inspect specific response
cat recordings/responses/abc123.json | jq '.'
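
To get a quick sense of coverage, you can also aggregate the index (a sketch that assumes only the recordings table and endpoint column shown above):

# Count recordings per endpoint
sqlite3 recordings/index.sqlite "SELECT endpoint, COUNT(*) FROM recordings GROUP BY endpoint;"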

Re-recording Tests

Use the automated workflow script for easier re-recording:

./scripts/github/schedule-record-workflow.sh --test-subdirs "inference,agents"

See the main testing guide for full details.

Local Re-recording

# Re-record specific tests
LLAMA_STACK_TEST_INFERENCE_MODE=record \
LLAMA_STACK_TEST_RECORDING_DIR=tests/integration/recordings \
pytest -s -v --stack-config=server:starter tests/integration/inference/test_modified.py

Note that when re-recording tests, you must use a Stack pointing to a server (i.e., server:starter). This subtlety exists because the set of tests run in server mode is a superset of the set of tests run with the library client.

Writing Tests

Basic Test Pattern

def test_basic_completion(llama_stack_client, text_model_id):
    response = llama_stack_client.inference.completion(
        model_id=text_model_id,
        content=CompletionMessage(role="user", content="Hello"),
    )

    # Test structure, not AI output quality
    assert response.completion_message is not None
    assert isinstance(response.completion_message.content, str)
    assert len(response.completion_message.content) > 0

Provider-Specific Tests

def test_asymmetric_embeddings(llama_stack_client, embedding_model_id):
    if embedding_model_id not in MODELS_SUPPORTING_TASK_TYPE:
        pytest.skip(f"Model {embedding_model_id} doesn't support task types")

    query_response = llama_stack_client.inference.embeddings(
        model_id=embedding_model_id,
        contents=["What is machine learning?"],
        task_type="query",
    )

    assert query_response.embeddings is not None