llama-stack-mirror/tests/unit
Ashwin Bharambe · 3ecb043d59 · 2025-10-27 13:41:05 -07:00
fix(context): prevent provider data leak between streaming requests
The preserve_contexts_async_generator function was not cleaning up context
variables after streaming iterations, so PROVIDER_DATA_VAR leaked between
sequential requests: provider credentials or configuration set for one
request could persist into subsequent ones.

Root cause: Context variables were set at the start of each iteration but
never cleared afterward. When generators were consumed outside their original
context manager (after the with block exited), the context values remained
set indefinitely.

The fix clears context variables by setting them to None after each yield
and when the generator terminates. This works reliably across all scenarios
including when the library client wraps async generators for sync consumption
(which creates new asyncio Contexts per iteration). Direct value setting
avoids Context-scoped token issues that would occur with token-based reset.
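
A minimal sketch of this pattern, with the wrapper's structure assumed for illustration (not the repository's actual code):

from contextvars import ContextVar
from typing import AsyncGenerator, TypeVar

T = TypeVar("T")

async def preserve_contexts_async_generator(
    gen: AsyncGenerator[T, None],
    context_vars: list[ContextVar],
) -> AsyncGenerator[T, None]:
    # Capture values while the creating context (e.g. the request's
    # `with` block) is still active.
    captured = [(var, var.get(None)) for var in context_vars]
    try:
        while True:
            # Re-apply the captured values so the generator body sees them,
            # even when each resume happens in a fresh asyncio Context (as
            # with the sync library client).
            for var, value in captured:
                var.set(value)
            try:
                item = await gen.__anext__()
            except StopAsyncIteration:
                break
            yield item
            # Clear by direct assignment after each yield; a token-based
            # reset() could fail because the token may belong to a different
            # Context than the one currently running.
            for var in context_vars:
                var.set(None)
    finally:
        # Exhausted or closed early: leave nothing set for the next request.
        for var in context_vars:
            var.set(None)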

Added unit and integration tests that verify context isolation.
cli              feat(prompts): attach prompts to storage stores in run configs (#3893)  2025-10-27 11:12:12 -07:00
conversations    fix(conversations)!: update Conversations API definitions (was: bump openai from 1.107.0 to 2.5.0) (#3847)  2025-10-22 12:32:48 -07:00
core             fix(context): prevent provider data leak between streaming requests  2025-10-27 13:41:05 -07:00
distribution     feat(prompts): attach prompts to storage stores in run configs (#3893)  2025-10-27 11:12:12 -07:00
files            feat(stores)!: use backend storage references instead of configs (#3697)  2025-10-20 13:20:09 -07:00
models           feat(tools)!: substantial clean up of "Tool" related datatypes (#3627)  2025-10-02 15:12:03 -07:00
prompts/prompts  feat(prompts): attach prompts to storage stores in run configs (#3893)  2025-10-27 11:12:12 -07:00
providers        fix!: Enhance response API support to not fail with tool calling (#3385)  2025-10-27 09:33:02 -07:00
rag              revert: "chore(cleanup)!: remove tool_runtime.rag_tool" (#3877)  2025-10-21 11:22:06 -07:00
registry         chore(cleanup)!: kill vector_db references as far as possible (#3864)  2025-10-20 20:06:16 -07:00
server           test: suppress expected error logs in SSE test (#3886)  2025-10-22 14:34:32 -07:00
tools            feat(tools)!: substantial clean up of "Tool" related datatypes (#3627)  2025-10-02 15:12:03 -07:00
utils            feat(stores)!: use backend storage references instead of configs (#3697)  2025-10-20 13:20:09 -07:00
__init__.py      chore: Add fixtures to conftest.py (#2067)  2025-05-06 13:57:48 +02:00
conftest.py      test: suppress expected error logs in SSE test (#3886)  2025-10-22 14:34:32 -07:00
fixtures.py      chore(rename): move llama_stack.distribution to llama_stack.core (#2975)  2025-07-30 23:30:53 -07:00
README.md        test: Measure and track code coverage (#2636)  2025-07-18 18:08:36 +02:00

Llama Stack Unit Tests

Unit Tests

Unit tests verify individual components and functions in isolation. They are fast, reliable, and don't require external services.

Prerequisites

  1. Python Environment: Ensure you have Python 3.12+ installed
  2. uv Package Manager: Install uv if not already installed; one way is shown below
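
For example, uv can be installed with its standalone installer, or from PyPI:

curl -LsSf https://astral.sh/uv/install.sh | sh   # or: pip install uv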

You can run the unit tests with:

./scripts/unit-tests.sh [PYTEST_ARGS]

Any additional arguments are passed through to pytest. For example, you can specify a test directory, a specific test file, or any pytest flags (e.g., -vvv for verbosity). If no test directory is specified, it defaults to tests/unit. For example:

./scripts/unit-tests.sh tests/unit/registry/test_registry.py -vvv
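
You can also combine a test path with pytest selection flags; for instance, -k filters by test-name substring (the path and pattern here are illustrative):

./scripts/unit-tests.sh tests/unit/server -k "sse" -vvv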

If you'd like to run against a non-default version of Python (the default is currently 3.12), pass the PYTHON_VERSION environment variable as follows:

source .venv/bin/activate
PYTHON_VERSION=3.13 ./scripts/unit-tests.sh

Test Configuration

  • Test Discovery: Tests are automatically discovered in the tests/unit/ directory
  • Async Support: Tests use --asyncio-mode=auto for automatic async test handling
  • Coverage: Tests generate coverage reports in htmlcov/ directory
  • Python Version: Defaults to Python 3.12, but can be overridden with the PYTHON_VERSION environment variable (see the sketch below)
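
Taken together, these defaults mean the script's core invocation behaves roughly like the sketch below (the exact flags inside scripts/unit-tests.sh may differ):

PYTHON_VERSION=${PYTHON_VERSION:-3.12}
uv run --python "$PYTHON_VERSION" \
  pytest --asyncio-mode=auto \
  --cov=llama_stack --cov-report=html \
  "${@:-tests/unit}"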

Coverage Reports

After running tests, you can view coverage reports:

# Open HTML coverage report in browser
open htmlcov/index.html  # macOS
xdg-open htmlcov/index.html  # Linux
start htmlcov/index.html  # Windows
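
If you prefer a terminal summary, and assuming the test run left a .coverage data file behind (pytest-cov writes one by default), the coverage CLI can print one:

uv run coverage report -m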