Llama Stack Integration Tests

We use pytest for parameterizing and running tests. You can see all options with:

cd tests/integration

# this will show a long list of options, look for "Custom options:"
pytest --help

Here are the most important options:

  • --stack-config: specify the stack config to use. You have three ways to point to a stack (see the examples right after this list):
    • a URL which points to a Llama Stack distribution server
    • a template (e.g., fireworks, together) or a path to a run.yaml file
    • a comma-separated list of api=provider pairs, e.g. inference=fireworks,safety=llama-guard,agents=meta-reference. This is most useful for testing a single API surface.
  • --env: set environment variables, e.g. --env KEY=value. This is a utility option to set environment variables required by various providers.
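
To make these concrete, here are illustrative invocations of the three forms (the server URL, template name, provider names, and API key below are examples, not requirements; add the model options described below to actually select models):

# 1. point at an already-running Llama Stack distribution server (adjust the URL/port to your setup)
pytest -s -v tests/integration/inference/ --stack-config=http://localhost:8321

# 2. use a template, which runs the stack in-process via the library client
pytest -s -v tests/integration/inference/ --stack-config=together

# 3. compose an ad-hoc stack from api=provider pairs, passing provider credentials via --env
pytest -s -v tests/integration/inference/ --stack-config=inference=fireworks --env FIREWORKS_API_KEY=<key>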

The models used by the tests can be controlled with the following options:

  • --text-model: comma-separated list of text models.
  • --vision-model: comma-separated list of vision models.
  • --embedding-model: comma-separated list of embedding models.
  • --safety-shield: comma-separated list of safety shields.
  • --judge-model: comma-separated list of judge models.
  • --embedding-dimension: output dimensionality of the embedding model to use for testing. Default: 384

Each of these accepts a comma-separated list and can be used to generate multiple parameter combinations. Note that tests are skipped if no relevant model is specified.
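
As a quick illustration of the parametrization (the model names here are only examples), passing two text models runs each text inference test once per model:

pytest -s -v tests/integration/inference/test_text_inference.py \
   --stack-config=together \
   --text-model=meta-llama/Llama-3.1-8B-Instruct,meta-llama/Llama-3.1-70B-Instruct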

Experimental options, still under development (an example follows the list):

  • --record-responses: record new API responses instead of using cached ones
  • --report: path where the test report should be written, e.g. --report=/path/to/report.md
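For example (the report path and model name here are illustrative), to write a report while running the text inference tests:

pytest -s -v tests/integration/inference/test_text_inference.py \
   --stack-config=together \
   --text-model=meta-llama/Llama-3.1-8B-Instruct \
   --report=inference_report.md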

Examples

Run all text inference tests with the together distribution, using meta-llama/Llama-3.1-8B-Instruct:

pytest -s -v tests/integration/inference/test_text_inference.py \
   --stack-config=together \
   --text-model=meta-llama/Llama-3.1-8B-Instruct

Run all inference tests for a number of models:

TEXT_MODELS=meta-llama/Llama-3.1-8B-Instruct,meta-llama/Llama-3.1-70B-Instruct
VISION_MODELS=meta-llama/Llama-3.2-11B-Vision-Instruct
EMBEDDING_MODELS=all-MiniLM-L6-v2
export TOGETHER_API_KEY=<together_api_key>

pytest -s -v tests/integration/inference/ \
   --stack-config=together \
   --text-model=$TEXT_MODELS \
   --vision-model=$VISION_MODELS \
   --embedding-model=$EMBEDDING_MODELS

The same thing, but instead of using a pre-built distribution, use an ad-hoc stack with just one provider (fireworks for inference):

export FIREWORKS_API_KEY=<fireworks_api_key>

pytest -s -v tests/integration/inference/ \
   --stack-config=inference=fireworks \
   --text-model=$TEXT_MODELS \
   --vision-model=$VISION_MODELS \
   --embedding-model=$EMBEDDING_MODELS

Run the Vector IO tests for a number of embedding models:

EMBEDDING_MODELS=all-MiniLM-L6-v2

pytest -s -v tests/integration/vector_io/ \
   --stack-config=inference=sentence-transformers,vector_io=sqlite-vec \
   --embedding-model=$EMBEDDING_MODELS
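
Finally, to exercise the tests over HTTP against a running distribution server (rather than the in-process library client), point --stack-config at the server's URL. A minimal sketch, assuming a server built from the together template and listening on the default port (adjust the URL and model to your setup):

# in one terminal: start a distribution server, e.g.
#   llama stack run together
# in another terminal: run the tests against it
pytest -s -v tests/integration/inference/ \
   --stack-config=http://localhost:8321 \
   --text-model=meta-llama/Llama-3.1-8B-Instruct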