diff --git a/tests/integration/README.md b/tests/integration/README.md
index cd2b07b8c..c7a8b4722 100644
--- a/tests/integration/README.md
+++ b/tests/integration/README.md
@@ -1,31 +1,87 @@
 # Llama Stack Integration Tests
-You can run llama stack integration tests on either a Llama Stack Library or a Llama Stack endpoint.
-To test on a Llama Stack library with certain configuration, run
+We use `pytest` for parameterizing and running tests. You can see all options with:
 ```bash
-LLAMA_STACK_CONFIG=./llama_stack/templates/cerebras/run.yaml pytest -s -v tests/api/inference/
-```
-or just the template name
-```bash
-LLAMA_STACK_CONFIG=together pytest -s -v tests/api/inference/
+cd tests/integration
+
+# this will show a long list of options, look for "Custom options:"
+pytest --help
 ```
-To test on a Llama Stack endpoint, run
+Here are the most important options:
+- `--stack-config`: specify the stack config to use. You have three ways to point to a stack:
+  - a URL which points to a Llama Stack distribution server
+  - a template (e.g., `fireworks`, `together`) or a path to a run.yaml file
+  - a comma-separated list of api=provider pairs, e.g. `inference=fireworks,safety=llama-guard,agents=meta-reference`. This is most useful for testing a single API surface.
+- `--env`: set environment variables, e.g. `--env KEY=value`. This is a utility option for setting environment variables required by various providers.
+
+Model parameters can be influenced by the following options:
+- `--text-model`: comma-separated list of text models.
+- `--vision-model`: comma-separated list of vision models.
+- `--embedding-model`: comma-separated list of embedding models.
+- `--safety-shield`: comma-separated list of safety shields.
+- `--judge-model`: comma-separated list of judge models.
+- `--embedding-dimension`: output dimensionality of the embedding model to use for testing. Default: 384
+
+Each of these is a comma-separated list and can be used to generate multiple parameter combinations.
+
+
+Experimental, under development, options:
+- `--record-responses`: record new API responses instead of using cached ones
+- `--report`: path where the test report should be written, e.g. `--report=/path/to/report.md`
+
+
+## Examples
+
+Run all text inference tests with the `together` distribution:
+
 ```bash
-LLAMA_STACK_BASE_URL=http://localhost:8089 pytest -s -v tests/api/inference
+pytest -s -v tests/api/inference/test_text_inference.py \
+   --stack-config=together
 ```
-## Report Generation
+
+Run all text inference tests with the `together` distribution and `meta-llama/Llama-3.1-8B-Instruct`:
-To generate a report, run with `--report` option
+
 ```bash
-LLAMA_STACK_CONFIG=together pytest -s -v report.md tests/api/ --report
+pytest -s -v tests/api/inference/test_text_inference.py \
+   --stack-config=together \
+   --text-model=meta-llama/Llama-3.1-8B-Instruct
 ```
-## Common options
-Depending on the API, there are custom options enabled
-- For tests in `inference/` and `agents/`, we support `--inference-model` (to be used in text inference tests) and `--vision-inference-model` (only used in image inference tests) overrides
-- For tests in `vector_io/`, we support `--embedding-model` override
-- For tests in `safety/`, we support `--safety-shield` override
-- The param can be `--report` or `--report <path>`
-If path is not provided, we do a best effort to infer based on the config / template name. For url endpoints, path is required.
+
+Running all inference tests for a number of models:
+
+```bash
+TEXT_MODELS=meta-llama/Llama-3.1-8B-Instruct,meta-llama/Llama-3.1-70B-Instruct
+VISION_MODELS=meta-llama/Llama-3.2-11B-Vision-Instruct
+EMBEDDING_MODELS=all-MiniLM-L6-v2
+TOGETHER_API_KEY=...
+
+pytest -s -v tests/api/inference/ \
+   --stack-config=together \
+   --text-model=$TEXT_MODELS \
+   --vision-model=$VISION_MODELS \
+   --embedding-model=$EMBEDDING_MODELS
+```
+
+Same thing, but instead of using the distribution, use an ad-hoc stack with just one provider (`fireworks` for inference):
+
+```bash
+FIREWORKS_API_KEY=...
+
+pytest -s -v tests/api/inference/ \
+   --stack-config=inference=fireworks \
+   --text-model=$TEXT_MODELS \
+   --vision-model=$VISION_MODELS \
+   --embedding-model=$EMBEDDING_MODELS
+```
+
+Running Vector IO tests for a number of embedding models:
+
+```bash
+EMBEDDING_MODELS=all-MiniLM-L6-v2
+
+pytest -s -v tests/api/vector_io/ \
+   --stack-config=inference=sentence-transformers,vector_io=sqlite-vec \
+   --embedding-model=$EMBEDDING_MODELS
+```
diff --git a/tests/integration/README.md.old b/tests/integration/README.md.old
deleted file mode 100644
index 8daaa4718..000000000
--- a/tests/integration/README.md.old
+++ /dev/null
@@ -1,109 +0,0 @@
-# Testing Llama Stack Providers
-
-The Llama Stack is designed as a collection of Lego blocks -- various APIs -- which are composable and can be used to quickly and reliably build an app. We need a testing setup which is relatively flexible to enable easy combinations of these providers.
-
-We use `pytest` and all of its dynamism to enable the features needed. Specifically:
-
-- We use `pytest_addoption` to add CLI options allowing you to override providers, models, etc.
-
-- We use `pytest_generate_tests` to dynamically parametrize our tests. This allows us to support a default set of (providers, models, etc.) combinations but retain the flexibility to override them via the CLI if needed.
-
-- We use `pytest_configure` to make sure we dynamically add appropriate marks based on the fixtures we make.
-
-- We use `pytest_collection_modifyitems` to filter tests based on the test config (if specified).
-
-## Pre-requisites
-
-Your development environment should have been configured as per the instructions in the
-[CONTRIBUTING.md](../../../CONTRIBUTING.md) file. In particular, make sure to install the test extra
-dependencies. Below is the full configuration:
-
-
-```bash
-cd llama-stack
-uv sync --extra dev --extra test
-uv pip install -e .
-source .venv/bin/activate
-```
-
-## Common options
-
-All tests support a `--providers` option which can be a string of the form `api1=provider_fixture1,api2=provider_fixture2`. So, when testing safety (which need inference and safety APIs) you can use `--providers inference=together,safety=meta_reference` to use these fixtures in concert.
-
-Depending on the API, there are custom options enabled. For example, `inference` tests allow for an `--inference-model` override, etc.
-
-By default, we disable warnings and enable short tracebacks. You can override them using pytest's flags as appropriate.
-
-Some providers need special API keys or other configuration options to work. You can check out the individual fixtures (located in `tests/<api>/fixtures.py`) for what these keys are. These can be specified using the `--env` CLI option. You can also have it be present in the environment (exporting in your shell) or put it in the `.env` file in the directory from which you run the test.
-For example, to use the Together fixture you can use `--env TOGETHER_API_KEY=<...>`
-
-## Inference
-
-We have the following orthogonal parametrizations (pytest "marks") for inference tests:
-- providers: (meta_reference, together, fireworks, ollama)
-- models: (llama_8b, llama_3b)
-
-If you want to run a test with the llama_8b model with fireworks, you can use:
-```bash
-pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
-   -m "fireworks and llama_8b" \
-   --env FIREWORKS_API_KEY=<...>
-```
-
-You can make it more complex to run both llama_8b and llama_3b on Fireworks, but only llama_3b with Ollama:
-```bash
-pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
-   -m "fireworks or (ollama and llama_3b)" \
-   --env FIREWORKS_API_KEY=<...>
-```
-
-Finally, you can override the model completely by doing:
-```bash
-pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
-   -m fireworks \
-   --inference-model "meta-llama/Llama3.1-70B-Instruct" \
-   --env FIREWORKS_API_KEY=<...>
-```
-
-> [!TIP]
-> If you’re using `uv`, you can isolate test executions by prefixing all commands with `uv run pytest...`.
-
-## Agents
-
-The Agents API composes three other APIs underneath:
-- Inference
-- Safety
-- Memory
-
-Given that each of these has several fixtures each, the set of combinations is large.
-We provide a default set of combinations (see `tests/agents/conftest.py`) with easy to use "marks":
-- `meta_reference` -- uses all the `meta_reference` fixtures for the dependent APIs
-- `together` -- uses Together for inference, and `meta_reference` for the rest
-- `ollama` -- uses Ollama for inference, and `meta_reference` for the rest
-
-An example test with Together:
-```bash
-pytest -s -m together llama_stack/providers/tests/agents/test_agents.py \
-  --env TOGETHER_API_KEY=<...>
-```
-
-If you want to override the inference model or safety model used, you can use the `--inference-model` or `--safety-shield` CLI options as appropriate.
-
-If you wanted to test a remotely hosted stack, you can use `-m remote` as follows:
-```bash
-pytest -s -m remote llama_stack/providers/tests/agents/test_agents.py \
-  --env REMOTE_STACK_URL=<...>
-```
-
-## Test Config
-If you want to run a test suite with a custom set of tests and parametrizations, you can define a YAML test config under the llama_stack/providers/tests/ folder and pass the filename through the `--config` option as follows:
-
-```
-pytest llama_stack/providers/tests/ --config=ci_test_config.yaml
-```
-
-### Test config format
-Currently, we support test config on inference, agents and memory api tests.
-
-Example format of test config can be found in ci_test_config.yaml.
-
-## Test Data
-We encourage providers to use our test data for internal development testing, so as to make it easier and consistent with the tests we provide. Each test case may define its own data format; please refer to our test source code for details on how these fields are used in the test.
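
A note on the comma-separated model options the new README introduces: each entry in the list becomes one pytest parameterization. A minimal shell sketch of that expansion (the model IDs are illustrative, and `pytest` is only echoed here, not invoked):

```shell
# Illustrative model list -- any comma-separated IDs work the same way.
TEXT_MODELS="meta-llama/Llama-3.1-8B-Instruct,meta-llama/Llama-3.1-70B-Instruct"

# Count entries in a comma-separated list; pytest generates one test
# parameterization per entry.
count_entries() {
  printf '%s\n' "$1" | tr ',' '\n' | wc -l
}

echo "text-model parameterizations: $(count_entries "$TEXT_MODELS")"

# Echo (don't run) the command the README describes:
echo pytest -s -v tests/api/inference/ \
  --stack-config=together \
  --text-model="$TEXT_MODELS"
```

With two models in `TEXT_MODELS`, each text inference test runs twice, once per model.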
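
Similarly, the `--stack-config=api=provider,...` form added in the new README is a plain comma-separated list of `api=provider` pairs. A sketch of how such a string decomposes (the provider names are examples taken from the README, not an exhaustive list):

```shell
# Example adhoc stack config string from the README.
CONFIG="inference=fireworks,safety=llama-guard,agents=meta-reference"

# Emit one api=provider pair per line.
list_pairs() {
  printf '%s\n' "$1" | tr ',' '\n'
}

# Print each pair as an "api -> provider" mapping.
list_pairs "$CONFIG" | while IFS='=' read -r api provider; do
  echo "$api -> $provider"
done
```

Each pair names the provider to back one API surface, which is why this form is handy for testing a single API in isolation.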