Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-07-14 09:06:10 +00:00
refactor(test): introduce --stack-config and simplify options (#1404)
You now run the integration tests with these options:

```bash
Custom options:
  --stack-config=STACK_CONFIG
                        a 'pointer' to the stack. this can either be: (a) a template name like `fireworks`, or
                        (b) a path to a run.yaml file, or (c) an adhoc config spec, e.g.
                        `inference=fireworks,safety=llama-guard,agents=meta-reference`
  --env=ENV             Set environment variables, e.g. --env KEY=value
  --text-model=TEXT_MODEL
                        comma-separated list of text models. Fixture name: text_model_id
  --vision-model=VISION_MODEL
                        comma-separated list of vision models. Fixture name: vision_model_id
  --embedding-model=EMBEDDING_MODEL
                        comma-separated list of embedding models. Fixture name: embedding_model_id
  --safety-shield=SAFETY_SHIELD
                        comma-separated list of safety shields. Fixture name: shield_id
  --judge-model=JUDGE_MODEL
                        comma-separated list of judge models. Fixture name: judge_model_id
  --embedding-dimension=EMBEDDING_DIMENSION
                        Output dimensionality of the embedding model to use for testing. Default: 384
  --record-responses    Record new API responses instead of using cached ones.
  --report=REPORT       Path where the test report should be written, e.g. --report=/path/to/report.md
```

Importantly, if you don't specify any of the models (text-model, vision-model, etc.) the relevant tests will get **skipped!**

This will make running tests somewhat more annoying since all options will need to be specified. We will make this easier by adding some easy wrapper yaml configs.

## Test Plan

Example:

```bash
ashwin@ashwin-mbp ~/local/llama-stack/tests/integration (unify_tests) $ LLAMA_STACK_CONFIG=fireworks pytest -s -v inference/test_text_inference.py \
   --text-model meta-llama/Llama-3.2-3B-Instruct
```
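
For illustration, the three forms of `--stack-config` combine with the model options like this; the run.yaml path and the shield ID are placeholders, not values taken from this PR:

```bash
# (a) a template name
pytest -s -v inference/test_text_inference.py \
   --stack-config=fireworks \
   --text-model=meta-llama/Llama-3.2-3B-Instruct

# (b) a path to a run.yaml file
pytest -s -v inference/test_text_inference.py \
   --stack-config=/path/to/run.yaml \
   --text-model=meta-llama/Llama-3.2-3B-Instruct

# (c) an adhoc config spec built from provider types
pytest -s -v inference/test_text_inference.py \
   --stack-config=inference=fireworks,safety=llama-guard,agents=meta-reference \
   --text-model=meta-llama/Llama-3.2-3B-Instruct \
   --safety-shield=<shield_id>
```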
This commit is contained in:
parent a0d6b165b0
commit 2fe976ed0a
15 changed files with 536 additions and 1144 deletions
@ -1,109 +0,0 @@

# Testing Llama Stack Providers

The Llama Stack is designed as a collection of Lego blocks -- various APIs -- which are composable and can be used to quickly and reliably build an app. We need a testing setup which is relatively flexible to enable easy combinations of these providers.

We use `pytest` and all of its dynamism to enable the features needed. Specifically:

- We use `pytest_addoption` to add CLI options allowing you to override providers, models, etc.
- We use `pytest_generate_tests` to dynamically parametrize our tests. This allows us to support a default set of (providers, models, etc.) combinations but retain the flexibility to override them via the CLI if needed (see the collection example below).
- We use `pytest_configure` to make sure we dynamically add appropriate marks based on the fixtures we make.
- We use `pytest_collection_modifyitems` to filter tests based on the test config (if specified).
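
A quick way to see the effect of this dynamic parametrization is to collect the tests without running them; the exact test IDs you see depend on your local configuration:

```bash
# List the generated (provider, model) parametrizations without executing any tests
pytest llama_stack/providers/tests/inference/test_text_inference.py --collect-only -q
```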

## Pre-requisites

Your development environment should have been configured as per the instructions in the
[CONTRIBUTING.md](../../../CONTRIBUTING.md) file. In particular, make sure to install the test extra
dependencies. Below is the full configuration:

```bash
cd llama-stack
uv sync --extra dev --extra test
uv pip install -e .
source .venv/bin/activate
```

## Common options

All tests support a `--providers` option which can be a string of the form `api1=provider_fixture1,api2=provider_fixture2`. So, when testing safety (which needs both the inference and safety APIs) you can use `--providers inference=together,safety=meta_reference` to use these fixtures in concert.
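
For example, a safety run wiring Together for inference and the reference safety provider might look like this (the test module path is illustrative):

```bash
pytest -s -v llama_stack/providers/tests/safety/test_safety.py \
   --providers inference=together,safety=meta_reference \
   --env TOGETHER_API_KEY=<...>
```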

Depending on the API, there are custom options enabled. For example, `inference` tests allow for an `--inference-model` override, etc.

By default, we disable warnings and enable short tracebacks. You can override them using pytest's flags as appropriate.

Some providers need special API keys or other configuration options to work. You can check out the individual fixtures (located in `tests/<api>/fixtures.py`) for what these keys are. These can be specified using the `--env` CLI option. You can also export the key in your shell or put it in a `.env` file in the directory from which you run the tests. For example, to use the Together fixture you can pass `--env TOGETHER_API_KEY=<...>`.
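
The three ways of supplying such a key are interchangeable; as a sketch:

```bash
# 1. Pass it on the pytest command line
pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
   -m together --env TOGETHER_API_KEY=<...>

# 2. Export it in your shell before running the tests
export TOGETHER_API_KEY=<...>

# 3. Or put it in a .env file in the directory you run pytest from
echo "TOGETHER_API_KEY=<...>" > .env
```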

## Inference

We have the following orthogonal parametrizations (pytest "marks") for inference tests:

- providers: (meta_reference, together, fireworks, ollama)
- models: (llama_8b, llama_3b)

If you want to run a test with the llama_8b model on Fireworks, you can use:

```bash
pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
   -m "fireworks and llama_8b" \
   --env FIREWORKS_API_KEY=<...>
```

You can make the selection more complex, e.g. to run both llama_8b and llama_3b on Fireworks, but only llama_3b with Ollama:

```bash
pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
   -m "fireworks or (ollama and llama_3b)" \
   --env FIREWORKS_API_KEY=<...>
```

Finally, you can override the model completely by doing:

```bash
pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
   -m fireworks \
   --inference-model "meta-llama/Llama3.1-70B-Instruct" \
   --env FIREWORKS_API_KEY=<...>
```

> [!TIP]
> If you’re using `uv`, you can isolate test executions by prefixing all commands with `uv run pytest...`.
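
For example, the Fireworks run above becomes:

```bash
uv run pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
   -m fireworks \
   --inference-model "meta-llama/Llama3.1-70B-Instruct" \
   --env FIREWORKS_API_KEY=<...>
```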

## Agents

The Agents API composes three other APIs underneath:

- Inference
- Safety
- Memory

Given that each of these has several fixtures, the set of combinations is large. We provide a default set of combinations (see `tests/agents/conftest.py`) with easy-to-use "marks":

- `meta_reference` -- uses all the `meta_reference` fixtures for the dependent APIs
- `together` -- uses Together for inference, and `meta_reference` for the rest
- `ollama` -- uses Ollama for inference, and `meta_reference` for the rest

An example test with Together:

```bash
pytest -s -m together llama_stack/providers/tests/agents/test_agents.py \
   --env TOGETHER_API_KEY=<...>
```

If you want to override the inference model or safety model used, you can use the `--inference-model` or `--safety-shield` CLI options as appropriate.
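
For instance (the shield ID below is a placeholder, not a value from this repository):

```bash
pytest -s -m together llama_stack/providers/tests/agents/test_agents.py \
   --inference-model "meta-llama/Llama3.1-70B-Instruct" \
   --safety-shield <shield_id> \
   --env TOGETHER_API_KEY=<...>
```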

If you want to test a remotely hosted stack, you can use `-m remote` as follows:

```bash
pytest -s -m remote llama_stack/providers/tests/agents/test_agents.py \
   --env REMOTE_STACK_URL=<...>
```

## Test Config

If you want to run a test suite with a custom set of tests and parametrizations, you can define a YAML test config under the `llama_stack/providers/tests/` folder and pass its filename through the `--config` option as follows:

```bash
pytest llama_stack/providers/tests/ --config=ci_test_config.yaml
```

### Test config format

Currently, test configs are supported for the inference, agents and memory API tests.

An example test config format can be found in `ci_test_config.yaml`.

## Test Data

We encourage providers to use our test data for internal development testing, to keep it easy and consistent with the tests we provide. Each test case may define its own data format; please refer to our test source code for details on how these fields are used in the tests.

@ -1,101 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

import json
import tempfile
from typing import Any, Dict, List, Optional

from pydantic import BaseModel

from llama_stack.apis.benchmarks import BenchmarkInput
from llama_stack.apis.datasets import DatasetInput
from llama_stack.apis.models import ModelInput
from llama_stack.apis.scoring_functions import ScoringFnInput
from llama_stack.apis.shields import ShieldInput
from llama_stack.apis.tools import ToolGroupInput
from llama_stack.apis.vector_dbs import VectorDBInput
from llama_stack.distribution.build import print_pip_install_help
from llama_stack.distribution.configure import parse_and_maybe_upgrade_config
from llama_stack.distribution.datatypes import Provider, StackRunConfig
from llama_stack.distribution.distribution import get_provider_registry
from llama_stack.distribution.request_headers import set_request_provider_data
from llama_stack.distribution.resolver import resolve_remote_stack_impls
from llama_stack.distribution.stack import construct_stack
from llama_stack.providers.datatypes import Api, RemoteProviderConfig
from llama_stack.providers.utils.kvstore.config import SqliteKVStoreConfig


# Bundle of constructed API implementations plus the run config they were built from.
class TestStack(BaseModel):
    impls: Dict[Api, Any]
    run_config: StackRunConfig


# Build a StackRunConfig from the given providers/resources and construct a stack
# (local, or remote if a test::remote provider is configured) for use in tests.
async def construct_stack_for_test(
    apis: List[Api],
    providers: Dict[str, List[Provider]],
    provider_data: Optional[Dict[str, Any]] = None,
    models: Optional[List[ModelInput]] = None,
    shields: Optional[List[ShieldInput]] = None,
    vector_dbs: Optional[List[VectorDBInput]] = None,
    datasets: Optional[List[DatasetInput]] = None,
    scoring_fns: Optional[List[ScoringFnInput]] = None,
    benchmarks: Optional[List[BenchmarkInput]] = None,
    tool_groups: Optional[List[ToolGroupInput]] = None,
) -> TestStack:
    sqlite_file = tempfile.NamedTemporaryFile(delete=False, suffix=".db")
    run_config = dict(
        image_name="test-fixture",
        apis=apis,
        providers=providers,
        metadata_store=SqliteKVStoreConfig(db_path=sqlite_file.name),
        models=models or [],
        shields=shields or [],
        vector_dbs=vector_dbs or [],
        datasets=datasets or [],
        scoring_fns=scoring_fns or [],
        benchmarks=benchmarks or [],
        tool_groups=tool_groups or [],
    )
    run_config = parse_and_maybe_upgrade_config(run_config)
    try:
        remote_config = remote_provider_config(run_config)
        if not remote_config:
            # TODO: add to provider registry by creating interesting mocks or fakes
            impls = await construct_stack(run_config, get_provider_registry())
        else:
            # we don't register resources for a remote stack as part of the fixture setup
            # because the stack is already "up". if a test needs to register resources, it
            # can do so manually always.

            impls = await resolve_remote_stack_impls(remote_config, run_config.apis)

        test_stack = TestStack(impls=impls, run_config=run_config)
    except ModuleNotFoundError as e:
        print_pip_install_help(providers)
        raise e

    if provider_data:
        set_request_provider_data({"X-LlamaStack-Provider-Data": json.dumps(provider_data)})

    return test_stack


# Return the RemoteProviderConfig if the run config points at a remote test stack;
# a remote stack cannot be mixed with non-remote providers.
def remote_provider_config(
    run_config: StackRunConfig,
) -> Optional[RemoteProviderConfig]:
    remote_config = None
    has_non_remote = False
    for api_providers in run_config.providers.values():
        for provider in api_providers:
            if provider.provider_type == "test::remote":
                remote_config = RemoteProviderConfig(**provider.config)
            else:
                has_non_remote = True

    if remote_config:
        assert not has_non_remote, "Remote stack cannot have non-remote providers"

    return remote_config