llama-stack-mirror/tests/integration/conftest.py
Sébastien Han 6039d922c0
fix: allow running vector tests with embedding dimension (#2467)
# What does this PR do?

Do not force 384 as the embedding dimension; use the one provided by
the test run.
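
For illustration only (hypothetical names, not the actual diff), this is the pattern the change enables in the vector tests:

```python
# Hypothetical sketch: take the dimension from the `embedding_dimension` fixture
# (driven by --embedding-dimension) instead of hard-coding 384.
def test_vector_db_register(llama_stack_client, embedding_model_id, embedding_dimension):
    llama_stack_client.vector_dbs.register(
        vector_db_id="test_vector_db",
        embedding_model=embedding_model_id,
        embedding_dimension=int(embedding_dimension),  # provided by the test run (e.g. 768)
    )
```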

## Test Plan

```
 pytest -s -vvv tests/integration/vector_io/test_vector_io.py --stack-config=http://localhost:8321 \
    -k "not(builtin_tool or safety_with_image or code_interpreter or test_rag)" \
    --text-model="meta-llama/Llama-3.2-3B-Instruct" \
    --embedding-model=granite-embedding-125m --embedding-dimension=768
Uninstalled 1 package in 16ms
Installed 1 package in 11ms
INFO     2025-06-18 10:52:03,314 tests.integration.conftest:59 tests: Setting DISABLE_CODE_SANDBOX=1 for macOS
/Users/leseb/Documents/AI/llama-stack/.venv/lib/python3.10/site-packages/pytest_asyncio/plugin.py:207: PytestDeprecationWarning: The configuration option "asyncio_default_fixture_loop_scope" is unset.
The event loop scope for asynchronous fixtures will default to the fixture caching scope. Future versions of pytest-asyncio will default the loop scope for asynchronous fixtures to function scope. Set the default fixture loop scope explicitly in order to avoid unexpected behavior in the future. Valid fixture loop scopes are: "function", "class", "module", "package", "session"

  warnings.warn(PytestDeprecationWarning(_DEFAULT_FIXTURE_LOOP_SCOPE_UNSET))
================================================= test session starts =================================================
platform darwin -- Python 3.10.16, pytest-8.3.4, pluggy-1.5.0 -- /Users/leseb/Documents/AI/llama-stack/.venv/bin/python
cachedir: .pytest_cache
metadata: {'Python': '3.10.16', 'Platform': 'macOS-15.5-arm64-arm-64bit', 'Packages': {'pytest': '8.3.4', 'pluggy': '1.5.0'}, 'Plugins': {'cov': '6.0.0', 'html': '4.1.1', 'json-report': '1.5.0', 'timeout': '2.4.0', 'metadata': '3.1.1', 'asyncio': '0.25.3', 'anyio': '4.8.0', 'nbval': '0.11.0'}}
rootdir: /Users/leseb/Documents/AI/llama-stack
configfile: pyproject.toml
plugins: cov-6.0.0, html-4.1.1, json-report-1.5.0, timeout-2.4.0, metadata-3.1.1, asyncio-0.25.3, anyio-4.8.0, nbval-0.11.0
asyncio: mode=strict, asyncio_default_fixture_loop_scope=None
collected 8 items

tests/integration/vector_io/test_vector_io.py::test_vector_db_retrieve[emb=granite-embedding-125m:dim=768] PASSED
tests/integration/vector_io/test_vector_io.py::test_vector_db_register[emb=granite-embedding-125m:dim=768] PASSED
tests/integration/vector_io/test_vector_io.py::test_insert_chunks[emb=granite-embedding-125m:dim=768-test_case0] PASSED
tests/integration/vector_io/test_vector_io.py::test_insert_chunks[emb=granite-embedding-125m:dim=768-test_case1] PASSED
tests/integration/vector_io/test_vector_io.py::test_insert_chunks[emb=granite-embedding-125m:dim=768-test_case2] PASSED
tests/integration/vector_io/test_vector_io.py::test_insert_chunks[emb=granite-embedding-125m:dim=768-test_case3] PASSED
tests/integration/vector_io/test_vector_io.py::test_insert_chunks[emb=granite-embedding-125m:dim=768-test_case4] PASSED
tests/integration/vector_io/test_vector_io.py::test_insert_chunks_with_precomputed_embeddings[emb=granite-embedding-125m:dim=768] PASSED

================================================== 8 passed in 5.50s ==================================================
```

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-06-19 13:29:04 +05:30


# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
import inspect
import itertools
import os
import platform
import textwrap
import time

import pytest
from dotenv import load_dotenv

from llama_stack.log import get_logger

logger = get_logger(__name__, category="tests")


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call":
        item.execution_outcome = report.outcome
        item.was_xfail = getattr(report, "wasxfail", False)


def pytest_runtest_teardown(item):
    # Check if the test actually ran and passed or failed, but was not skipped or an expected failure (xfail)
    outcome = getattr(item, "execution_outcome", None)
    was_xfail = getattr(item, "was_xfail", False)

    name = item.nodeid
    if not any(x in name for x in ("inference/", "safety/", "agents/")):
        return

    logger.debug(f"Test '{item.nodeid}' outcome was '{outcome}' (xfail={was_xfail})")
    if outcome in ("passed", "failed") and not was_xfail:
        interval_seconds = os.getenv("LLAMA_STACK_TEST_INTERVAL_SECONDS")
        if interval_seconds:
            time.sleep(float(interval_seconds))


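# Usage note (illustrative): running with LLAMA_STACK_TEST_INTERVAL_SECONDS=2 makes the
# teardown hook above sleep two seconds after each passed or failed inference/, safety/,
# or agents/ test, e.g. to throttle calls to remote providers.
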
def pytest_configure(config):
    config.option.tbstyle = "short"
    config.option.disable_warnings = True

    load_dotenv()

    env_vars = config.getoption("--env") or []
    for env_var in env_vars:
        key, value = env_var.split("=", 1)
        os.environ[key] = value

    if platform.system() == "Darwin":  # Darwin is the system name for macOS
        os.environ["DISABLE_CODE_SANDBOX"] = "1"
        logger.info("Setting DISABLE_CODE_SANDBOX=1 for macOS")


def pytest_addoption(parser):
    parser.addoption(
        "--stack-config",
        help=textwrap.dedent(
            """
            a 'pointer' to the stack. this can be either:
            (a) a template name like `fireworks`, or
            (b) a path to a run.yaml file, or
            (c) an adhoc config spec, e.g. `inference=fireworks,safety=llama-guard,agents=meta-reference`
            """
        ),
    )
    parser.addoption("--env", action="append", help="Set environment variables, e.g. --env KEY=value")
    parser.addoption(
        "--text-model",
        help="comma-separated list of text models. Fixture name: text_model_id",
    )
    parser.addoption(
        "--vision-model",
        help="comma-separated list of vision models. Fixture name: vision_model_id",
    )
    parser.addoption(
        "--embedding-model",
        help="comma-separated list of embedding models. Fixture name: embedding_model_id",
    )
    parser.addoption(
        "--safety-shield",
        help="comma-separated list of safety shields. Fixture name: shield_id",
    )
    parser.addoption(
        "--judge-model",
        help="Specify the judge model to use for testing",
    )
    parser.addoption(
        "--embedding-dimension",
        type=int,
        default=384,
        help="Output dimensionality of the embedding model to use for testing. Default: 384",
    )
    parser.addoption(
        "--record-responses",
        action="store_true",
        help="Record new API responses instead of using cached ones.",
    )
    parser.addoption(
        "--report",
        help="Path where the test report should be written, e.g. --report=/path/to/report.md",
    )


MODEL_SHORT_IDS = {
    "meta-llama/Llama-3.2-3B-Instruct": "3B",
    "meta-llama/Llama-3.1-8B-Instruct": "8B",
    "meta-llama/Llama-3.1-70B-Instruct": "70B",
    "meta-llama/Llama-3.1-405B-Instruct": "405B",
    "meta-llama/Llama-3.2-11B-Vision-Instruct": "11B",
    "meta-llama/Llama-3.2-90B-Vision-Instruct": "90B",
    "meta-llama/Llama-3.3-70B-Instruct": "70B",
    "meta-llama/Llama-Guard-3-1B": "Guard1B",
    "meta-llama/Llama-Guard-3-8B": "Guard8B",
    "all-MiniLM-L6-v2": "MiniLM",
}


def get_short_id(value):
    return MODEL_SHORT_IDS.get(value, value)


def pytest_generate_tests(metafunc):
    """
    This is the main function which processes CLI arguments and generates various combinations of parameters.
    It is also responsible for generating test IDs which are succinct enough.

    Each option can be a comma-separated list of values, which results in multiple parameter combinations.
    """
    params = []
    param_values = {}
    id_parts = []

    # Map of fixture name to its CLI option and ID prefix
    fixture_configs = {
        "text_model_id": ("--text-model", "txt"),
        "vision_model_id": ("--vision-model", "vis"),
        "embedding_model_id": ("--embedding-model", "emb"),
        "shield_id": ("--safety-shield", "shield"),
        "judge_model_id": ("--judge-model", "judge"),
        "embedding_dimension": ("--embedding-dimension", "dim"),
    }

    # Collect all parameters and their values
    for fixture_name, (option, id_prefix) in fixture_configs.items():
        if fixture_name not in metafunc.fixturenames:
            continue

        params.append(fixture_name)
        val = metafunc.config.getoption(option)

        values = [v.strip() for v in str(val).split(",")] if val else [None]
        param_values[fixture_name] = values
        if val:
            id_parts.extend(f"{id_prefix}={get_short_id(v)}" for v in values)

    if not params:
        return

    # Generate all combinations of parameter values
    value_combinations = list(itertools.product(*[param_values[p] for p in params]))

    # Generate test IDs
    test_ids = []
    non_empty_params = [(i, values) for i, values in enumerate(param_values.values()) if values[0] is not None]

    # Get actual function parameters using inspect
    test_func_params = set(inspect.signature(metafunc.function).parameters.keys())

    if non_empty_params:
        # For each combination, build an ID from the non-None parameters
        for combo in value_combinations:
            parts = []
            for param_name, val in zip(params, combo, strict=True):
                # Only include if parameter is in test function signature and value is meaningful
                if param_name in test_func_params and val:
                    prefix = fixture_configs[param_name][1]  # Get the ID prefix
                    parts.append(f"{prefix}={get_short_id(val)}")
            if parts:
                test_ids.append(":".join(parts))

    metafunc.parametrize(params, value_combinations, scope="session", ids=test_ids if test_ids else None)


pytest_plugins = ["tests.integration.fixtures.common"]
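
For context, a minimal sketch (hypothetical test module, not part of this file) of how a test picks up the fixtures wired up in `pytest_generate_tests` above:

```python
# Hypothetical example: fixture names match those registered by the conftest.
def test_uses_parametrized_fixtures(embedding_model_id, embedding_dimension):
    # With --embedding-model=granite-embedding-125m --embedding-dimension=768 this runs once,
    # with ID "emb=granite-embedding-125m:dim=768". Values arrive as strings split from the
    # CLI options, so numeric options may need an explicit cast.
    assert embedding_model_id
    assert int(embedding_dimension) > 0
```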