Llama Stack Unit Tests

Unit Tests

Unit tests verify individual components and functions in isolation. They are fast, reliable, and don't require external services.

Prerequisites

  1. Python Environment: Ensure you have Python 3.12+ installed
  2. uv Package Manager: Install uv if not already installed (see the installer command below)
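
If uv is not installed yet, one common route is Astral's official installer script (or pip install uv):

# Install uv with the official installer script
curl -LsSf https://astral.sh/uv/install.sh | sh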

You can run the unit tests with:

./scripts/unit-tests.sh [PYTEST_ARGS]

Any additional arguments are passed to pytest. For example, you can specify a test directory, a specific test file, or any pytest flags (e.g., -vvv for verbosity). If no test directory is specified, it defaults to tests/unit, e.g.:

./scripts/unit-tests.sh tests/unit/registry/test_registry.py -vvv
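
Because the extra arguments go straight to pytest, the usual test-selection flags work as well; the keyword below is just a hypothetical example of filtering tests with -k:

./scripts/unit-tests.sh tests/unit/server -k "sse" -vv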

If you'd like to run the tests against a non-default version of Python (the default is currently 3.12), set the PYTHON_VERSION environment variable as follows:

source .venv/bin/activate
PYTHON_VERSION=3.13 ./scripts/unit-tests.sh

Test Configuration

  • Test Discovery: Tests are automatically discovered in the tests/unit/ directory
  • Async Support: Tests use --asyncio-mode=auto for automatic async test handling
  • Coverage: Tests generate coverage reports in the htmlcov/ directory
  • Python Version: Defaults to Python 3.12, but can be overridden with the PYTHON_VERSION environment variable
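
Taken together, the script's behavior is roughly equivalent to the invocation below. This is only an illustrative sketch, not the script's exact contents; the real flags and coverage targets may differ:

uv run --python 3.12 pytest --asyncio-mode=auto \
    --cov=llama_stack --cov-report=html tests/unit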

Coverage Reports

After running the tests, you can view the coverage reports:

# Open HTML coverage report in browser
open htmlcov/index.html  # macOS
xdg-open htmlcov/index.html  # Linux
start htmlcov/index.html  # Windows
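
If the run also leaves a .coverage data file behind (pytest-cov writes one by default alongside the HTML report), you can print a quick terminal summary with per-file missing lines, assuming the coverage CLI is available in the environment:

# Print a terminal coverage summary with missing line numbers
uv run coverage report -m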