Caching on LiteLLM

LiteLLM supports multiple caching mechanisms, allowing users to choose the caching backend that best fits their use case. A short usage sketch follows the list below.

The following caching mechanisms are supported:

  1. RedisCache
  2. RedisSemanticCache
  3. QdrantSemanticCache
  4. InMemoryCache
  5. DiskCache
  6. S3Cache
  7. DualCache (updates both Redis and an in-memory cache simultaneously)
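
The snippet below is a minimal sketch of enabling one of these backends through the Cache class in caching.py. The Redis host, port, and password shown are placeholder assumptions for a local Redis instance; the full set of constructor parameters is covered in the LiteLLM docs.

```python
import litellm
from litellm import completion
from litellm.caching.caching import Cache

# Attach a cache to the SDK. type="redis" selects RedisCache; other values
# (e.g. "local", "disk", "s3") select the corresponding backends above.
# The connection details below are placeholders for a local Redis instance.
litellm.cache = Cache(type="redis", host="localhost", port="6379", password="")

messages = [{"role": "user", "content": "What is the capital of France?"}]

# The first call hits the provider; the repeated call is served from the cache.
response1 = completion(model="gpt-4o-mini", messages=messages, caching=True)
response2 = completion(model="gpt-4o-mini", messages=messages, caching=True)
```

Calling Cache() with no arguments defaults to the in-memory ("local") cache, which is convenient for local testing.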

Folder Structure

litellm/caching/
├── Readme.md
├── __init__.py
├── _internal_lru_cache.py
├── base_cache.py
├── caching.py
├── caching_handler.py
├── disk_cache.py
├── dual_cache.py
├── in_memory_cache.py
├── qdrant_semantic_cache.py
├── redis_cache.py
├── redis_semantic_cache.py
└── s3_cache.py

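These cache classes expose the same get_cache / set_cache style interface defined in base_cache.py, so they can be composed or swapped. Below is a rough sketch of using DualCache directly; the constructor arguments and connection details are assumptions based on the class names above, so check the signatures in dual_cache.py before relying on them.

```python
from litellm.caching.caching import DualCache, InMemoryCache, RedisCache

# DualCache writes to both layers and checks the in-memory layer before
# falling back to Redis on a read. Connection details are placeholders.
dual_cache = DualCache(
    in_memory_cache=InMemoryCache(),
    redis_cache=RedisCache(host="localhost", port=6379, password=""),
)

dual_cache.set_cache("user:42:spend", 12.5)
print(dual_cache.get_cache("user:42:spend"))  # in-memory hit, no Redis round trip
```
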
Documentation

For configuration details and examples for each cache type, see the LiteLLM caching docs: https://docs.litellm.ai/docs/caching/all_caches