llama-stack-mirror/tests/unit/core
Ashwin Bharambe 3ecb043d59 fix(context): prevent provider data leak between streaming requests
The preserve_contexts_async_generator function was not cleaning up context
variables after streaming iterations, so PROVIDER_DATA_VAR leaked between
sequential requests: provider credentials or configuration set for one
request could persist and become visible to subsequent requests.

Root cause: Context variables were set at the start of each iteration but
never cleared afterward. When generators were consumed outside their original
context manager (after the with block exited), the context values remained
set indefinitely.

The fix clears the context variables by setting them to None after each yield
and again when the generator terminates. This works reliably in every scenario,
including when the library client wraps async generators for sync consumption
(which runs each iteration in a fresh asyncio Context). Setting values directly
avoids token-based reset, whose tokens are only valid in the Context that
created them.
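As an illustration, here is a minimal sketch of the cleanup pattern described
above. The signature preserve_contexts_async_generator(gen, context_vars) and
all other details are assumptions for illustration, not the exact llama-stack
implementation.

```python
from contextvars import ContextVar
from typing import Any, AsyncGenerator, List

# Sketch only: the real preserve_contexts_async_generator lives in llama-stack;
# names and details here are illustrative assumptions.
def preserve_contexts_async_generator(
    gen: AsyncGenerator[Any, None],
    context_vars: List[ContextVar],
) -> AsyncGenerator[Any, None]:
    # Capture the values that were set inside the request's `with` block,
    # before that block exits and the stream is consumed elsewhere.
    initial_values = {var: var.get(None) for var in context_vars}

    async def wrapper() -> AsyncGenerator[Any, None]:
        try:
            while True:
                # Re-apply the captured values at the start of each iteration;
                # the consumer may be driving the stream from a fresh asyncio
                # Context (e.g. a sync client wrapping the async generator).
                for var in context_vars:
                    var.set(initial_values[var])
                try:
                    item = await gen.__anext__()
                except StopAsyncIteration:
                    break
                yield item
                # Clear after each yield so nothing leaks into whatever request
                # runs next on this context. Setting None directly sidesteps
                # token-based reset, whose tokens are only valid in the Context
                # that created them.
                for var in context_vars:
                    var.set(None)
        finally:
            # Clear again when the generator terminates or is closed early.
            for var in context_vars:
                var.set(None)

    return wrapper()
```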

Added unit and integration tests that verify context isolation.
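A leak test could look roughly like the following. It reuses the wrapper
sketched above and a stand-in PROVIDER_DATA_VAR; both are illustrative
assumptions, not the repository's actual test code in
test_provider_data_context.py.

```python
import asyncio
from contextvars import ContextVar

# Stand-in for llama-stack's PROVIDER_DATA_VAR; illustrative only.
PROVIDER_DATA_VAR: ContextVar = ContextVar("provider_data", default=None)

async def _fake_stream():
    # Simulates a provider streaming two chunks back to the caller.
    yield "chunk-1"
    yield "chunk-2"

async def _run_one_request(payload):
    # Simulate the request context manager: set provider data, wrap the
    # stream, then clear the variable as the `with` block would on exit.
    PROVIDER_DATA_VAR.set(payload)
    wrapped = preserve_contexts_async_generator(_fake_stream(), [PROVIDER_DATA_VAR])
    PROVIDER_DATA_VAR.set(None)

    chunks = [chunk async for chunk in wrapped]
    assert chunks == ["chunk-1", "chunk-2"]

async def main():
    await _run_one_request({"api_key": "first-request"})
    # After the first stream has been fully consumed, nothing from that
    # request should remain visible to the next one on this context.
    assert PROVIDER_DATA_VAR.get() is None
    await _run_one_request({"api_key": "second-request"})
    assert PROVIDER_DATA_VAR.get() is None

asyncio.run(main())
```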
2025-10-27 13:41:05 -07:00
routers chore: support default model in moderations API (#3890) 2025-10-23 16:03:53 -07:00
test_provider_data_context.py fix(context): prevent provider data leak between streaming requests 2025-10-27 13:41:05 -07:00
test_stack_validation.py chore: support default model in moderations API (#3890) 2025-10-23 16:03:53 -07:00
test_storage_references.py feat(stores)!: use backend storage references instead of configs (#3697) 2025-10-20 13:20:09 -07:00