Llama Stack Unit Tests

Unit Tests

Unit tests verify individual components and functions in isolation. They are fast, reliable, and don't require external services.
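
For instance, a unit test exercises a single function with plain asserts and no server or network setup. A minimal sketch, using a hypothetical helper (normalize_model_id) rather than a real llama_stack module:

def normalize_model_id(model_id: str) -> str:
    # Hypothetical helper: strip surrounding whitespace from a model id.
    return model_id.strip()

def test_normalize_model_id_strips_whitespace():
    # No external services needed -- just call the function and assert.
    assert normalize_model_id("  my-model ") == "my-model"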

Prerequisites

  1. Python Environment: Ensure you have Python 3.12+ installed
  2. uv Package Manager: Install uv if not already installed (one way is shown below)

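If uv is not already installed, one common way to get it is via its standalone installer (pip install uv also works):

curl -LsSf https://astral.sh/uv/install.sh | sh
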
You can run the unit tests with:

./scripts/unit-tests.sh [PYTEST_ARGS]

Any additional arguments are passed to pytest. For example, you can specify a test directory, a specific test file, or any pytest flags (e.g., -vvv for verbosity). If no test directory is specified, it defaults to "tests/unit". For example:

./scripts/unit-tests.sh tests/unit/registry/test_registry.py -vvv

If you'd like to run the tests with a non-default version of Python (the default is currently 3.12), set the PYTHON_VERSION environment variable as follows:

source .venv/bin/activate
PYTHON_VERSION=3.13 ./scripts/unit-tests.sh

Test Configuration

  • Test Discovery: Tests are automatically discovered in the tests/unit/ directory
  • Async Support: Tests run with --asyncio-mode=auto, so async test functions are handled automatically (see the sketch after this list)
  • Coverage: Test runs generate coverage reports in the htmlcov/ directory
  • Python Version: Defaults to Python 3.12, but can be overridden with the PYTHON_VERSION environment variable
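
With --asyncio-mode=auto, pytest-asyncio collects and runs async def test functions directly, with no per-test decorator. A minimal sketch (the fetch_config helper is hypothetical, standing in for real application code):

import asyncio

async def fetch_config(key: str) -> str:
    # Hypothetical async helper standing in for real application code.
    await asyncio.sleep(0)
    return f"value-for-{key}"

async def test_fetch_config_returns_value():
    # Runs under --asyncio-mode=auto without an explicit asyncio marker.
    result = await fetch_config("model")
    assert result == "value-for-model"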

Coverage Reports

After running tests, you can view coverage reports:

# Open HTML coverage report in browser
open htmlcov/index.html  # macOS
xdg-open htmlcov/index.html  # Linux
start htmlcov/index.html  # Windows
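
Because extra arguments are forwarded to pytest, and the htmlcov/ output suggests pytest-cov is in use, you can likely also get a quick summary in the terminal; this assumes pytest-cov's term-missing reporter is available:

# Print per-file coverage with missing line numbers in the terminal
./scripts/unit-tests.sh --cov-report=term-missing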