llama-stack-mirror/tests/unit/providers/utils
William Caban · 299c575daa · feat(cache): add cache store abstraction layer
Implement a reusable cache store abstraction with in-memory and Redis
backends as the foundation for the prompt caching feature (PR 1 of a progressive delivery series).

- Add CacheStore protocol defining cache interface
- Implement MemoryCacheStore with LRU, LFU, and TTL-only eviction policies
- Implement RedisCacheStore with connection pooling and retry logic
- Add CircuitBreaker for cache backend failure protection
- Include comprehensive unit tests (55 tests, >80% coverage)
- Add dependencies: cachetools>=5.5.0, redis>=5.2.0

This abstraction enables flexible caching implementations for the prompt
caching middleware without coupling to specific storage backends.
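As a rough illustration of the abstraction described above, here is a minimal sketch of what the `CacheStore` protocol and the LRU mode of `MemoryCacheStore` could look like. Only the class names come from this commit message; the method names, the synchronous interface, and the stdlib `OrderedDict`-based eviction are assumptions for illustration (the actual implementation adds `cachetools` as a dependency and may expose an async API).

```python
from collections import OrderedDict
from typing import Any, Optional, Protocol


class CacheStore(Protocol):
    """Hypothetical minimal cache interface; the real protocol may differ."""

    def get(self, key: str) -> Optional[Any]: ...
    def set(self, key: str, value: Any) -> None: ...
    def delete(self, key: str) -> None: ...


class MemoryCacheStore:
    """In-memory backend sketch with LRU eviction (stdlib only;
    the commit's implementation likely builds on cachetools)."""

    def __init__(self, max_size: int = 1024) -> None:
        self._max_size = max_size
        self._data: "OrderedDict[str, Any]" = OrderedDict()

    def get(self, key: str) -> Optional[Any]:
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark key as most recently used
        return self._data[key]

    def set(self, key: str, value: Any) -> None:
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self._max_size:
            self._data.popitem(last=False)  # evict least recently used entry

    def delete(self, key: str) -> None:
        self._data.pop(key, None)
```

Because callers depend only on the protocol, a `RedisCacheStore` (or a circuit-breaker wrapper) can be swapped in without touching the caching middleware.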

Signed-off-by: William Caban <willliam.caban@gmail.com>
2025-11-15 14:55:18 -05:00
cache/                            feat(cache): add cache store abstraction layer                                  2025-11-15 14:55:18 -05:00
inference/                        fix: rename llama_stack_api dir (#4155)                                         2025-11-13 15:04:36 -08:00
memory/                           fix: rename llama_stack_api dir (#4155)                                         2025-11-13 15:04:36 -08:00
__init__.py                       fix: add check for interleavedContent (#1973)                                   2025-05-06 09:55:07 -07:00
test_form_data.py                 fix(expires_after): make sure multipart/form-data is properly parsed (#3612)    2025-09-30 16:14:03 -04:00
test_model_registry.py            fix: rename llama_stack_api dir (#4155)                                         2025-11-13 15:04:36 -08:00
test_openai_compat_conversion.py  feat(tools)!: substantial clean up of "Tool" related datatypes (#3627)          2025-10-02 15:12:03 -07:00
test_scheduler.py                 chore: default to pytest asyncio-mode=auto (#2730)                              2025-07-11 13:00:24 -07:00