# llama-stack-mirror/tests/unit
Latest commit `c574db5f1d` by Roy Belio (2025-12-14 07:51:09 -05:00):

## fix(inference): AttributeError in streaming response cleanup (#4236)
This PR fixes issue #3185. When clients cancel streaming requests, the server tries to clean up with:

```python
await event_gen.aclose()  # AsyncStream doesn't have aclose()!
```

But OpenAI's `AsyncStream` has never had a public `aclose()` method; its close method is `close()` (which is async). The error message literally tells us:

```
AttributeError: 'AsyncStream' object has no attribute 'aclose'. Did you mean: 'close'?
                                                                            ^^^^^^^^
```
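
The fix is to call the close method the object actually exposes. Below is a minimal sketch of defensive cleanup, assuming the event source may be either a plain async generator (which has `aclose()`) or an OpenAI `AsyncStream` (which has an async `close()`); the helper name `_close_event_stream` is hypothetical and not the exact code in this PR:

```python
import inspect


async def _close_event_stream(event_gen) -> None:
    # Hypothetical helper: plain async generators expose aclose(),
    # while openai.AsyncStream exposes an async close() instead.
    closer = getattr(event_gen, "aclose", None) or getattr(event_gen, "close", None)
    if closer is None:
        return
    result = closer()
    if inspect.isawaitable(result):  # tolerate a synchronous close()
        await result
```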

## Verification

* The reproduction script [`reproduce_issue_3185.sh`](https://gist.github.com/r-bit-rry/dea4f8fbb81c446f5db50ea7abd6379b) can be used to verify the fix.
* Manual checks and validation against the original OpenAI library code.
| Path | Last commit | Last updated |
|------|-------------|--------------|
| cli | feat: remove usage of build yaml (#4192) | 2025-12-10 10:12:12 +01:00 |
| conversations | feat: remove usage of build yaml (#4192) | 2025-12-10 10:12:12 +01:00 |
| core | feat: Add support for query rewrite in vector_store.search (#4171) | 2025-12-10 10:06:19 -05:00 |
| distribution | feat: convert Benchmarks API to use FastAPI router (#4309) | 2025-12-10 15:04:27 +01:00 |
| files | refactor(storage): make { kvstore, sqlstore } as llama stack "internal" APIs (#4181) | 2025-11-18 13:15:16 -08:00 |
| models | refactor: remove dead inference API code and clean up imports (#4093) | 2025-11-10 15:29:24 -08:00 |
| prompts/prompts | feat: remove usage of build yaml (#4192) | 2025-12-10 10:12:12 +01:00 |
| providers | fix(inference): AttributeError in streaming response cleanup (#4236) | 2025-12-14 07:51:09 -05:00 |
| rag | fix: rename llama_stack_api dir (#4155) | 2025-11-13 15:04:36 -08:00 |
| registry | refactor(storage): make { kvstore, sqlstore } as llama stack "internal" APIs (#4181) | 2025-11-18 13:15:16 -08:00 |
| server | feat: remove usage of build yaml (#4192) | 2025-12-10 10:12:12 +01:00 |
| tools | fix: rename llama_stack_api dir (#4155) | 2025-11-13 15:04:36 -08:00 |
| utils | fix(inference): respect table_name config in InferenceStore (#4371) | 2025-12-11 14:50:23 +01:00 |
| __init__.py | chore: Add fixtures to conftest.py (#2067) | 2025-05-06 13:57:48 +02:00 |
| conftest.py | test: suppress expected error logs in SSE test (#3886) | 2025-10-22 14:34:32 -07:00 |
| fixtures.py | refactor(storage): make { kvstore, sqlstore } as llama stack "internal" APIs (#4181) | 2025-11-18 13:15:16 -08:00 |
| README.md | test: Measure and track code coverage (#2636) | 2025-07-18 18:08:36 +02:00 |

# Llama Stack Unit Tests

Unit tests verify individual components and functions in isolation. They are fast, reliable, and don't require external services.
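
For illustration only, a unit test in this style is just an ordinary pytest function; the path, function under test, and test below are hypothetical, not actual files in the repo:

```python
# tests/unit/example/test_slug.py (hypothetical path, for illustration)


def slugify(title: str) -> str:
    # Toy stand-in for code under test; real tests import from llama_stack.
    return "-".join(title.lower().split())


def test_slugify_collapses_whitespace():
    assert slugify("  Llama   Stack  ") == "llama-stack"
```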

## Prerequisites

1. **Python Environment**: Ensure you have Python 3.12+ installed.
2. **uv Package Manager**: Install uv if not already installed.

You can run the unit tests by running:

```bash
./scripts/unit-tests.sh [PYTEST_ARGS]
```

Any additional arguments are passed to pytest. For example, you can specify a test directory, a specific test file, or any pytest flags (e.g., `-vvv` for verbosity). If no test directory is specified, it defaults to `tests/unit`, e.g.:

```bash
./scripts/unit-tests.sh tests/unit/registry/test_registry.py -vvv
```

If you'd like to run against a non-default version of Python (the default is 3.12), set the `PYTHON_VERSION` environment variable:

```bash
source .venv/bin/activate
PYTHON_VERSION=3.13 ./scripts/unit-tests.sh
```

## Test Configuration

* **Test Discovery**: Tests are automatically discovered in the `tests/unit/` directory.
* **Async Support**: Tests use `--asyncio-mode=auto` for automatic async test handling (see the sketch after this list).
* **Coverage**: Tests generate coverage reports in the `htmlcov/` directory.
* **Python Version**: Defaults to Python 3.12, but can be overridden with the `PYTHON_VERSION` environment variable.
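
As an example of the async support: with `--asyncio-mode=auto`, an `async def` test is collected and run without an explicit `@pytest.mark.asyncio` marker. A hypothetical test:

```python
import asyncio


async def test_gather_preserves_order():
    # Runs on pytest-asyncio's event loop; no marker needed in auto mode.
    async def double(x: int) -> int:
        await asyncio.sleep(0)  # yield control to the event loop
        return x * 2

    assert await asyncio.gather(double(1), double(2), double(3)) == [2, 4, 6]
```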

## Coverage Reports

After running tests, you can view coverage reports:

```bash
# Open HTML coverage report in browser
open htmlcov/index.html      # macOS
xdg-open htmlcov/index.html  # Linux
start htmlcov/index.html     # Windows
```