Caching on LiteLLM

LiteLLM supports multiple caching mechanisms, allowing users to choose the caching solution that best fits their use case. The supported mechanisms are listed below; a short configuration sketch follows the list.

The following caching mechanisms are supported:

  1. RedisCache
  2. RedisSemanticCache
  3. QdrantSemanticCache
  4. InMemoryCache
  5. DiskCache
  6. S3Cache
  7. DualCache (updates both Redis and an in-memory cache simultaneously)
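As a rough usage sketch, these backends are typically configured through the shared `Cache` wrapper in `caching.py`. The exact import path, `type` values, and constructor arguments below are assumptions and may differ between LiteLLM versions:

```python
# Minimal sketch: enable a global Redis-backed cache for completion calls.
# Import path, "type" values, and constructor arguments are assumptions here
# and may differ between LiteLLM versions.
import litellm
from litellm.caching.caching import Cache

litellm.cache = Cache(
    type="redis",        # assumed alternatives: "local", "disk", "s3", "qdrant-semantic", ...
    host="localhost",
    port=6379,
)

# With litellm.cache set, repeated identical requests can be answered from the cache.
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, world"}],
    caching=True,
)
```

In this sketch, `litellm.cache` selects the backend, while the per-call `caching` flag controls whether an individual request uses it.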

Folder Structure

litellm/caching/
├── base_cache.py
├── caching.py
├── caching_handler.py
├── disk_cache.py
├── dual_cache.py
├── in_memory_cache.py
├── qdrant_semantic_cache.py
├── redis_cache.py
├── redis_semantic_cache.py
└── s3_cache.py
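`dual_cache.py` implements the DualCache listed as item 7 above: writes update both the in-memory cache and Redis, and reads check the in-memory layer before falling back to Redis. A rough sketch of the idea, with class names taken from the module names above and constructor/method signatures as assumptions:

```python
# Rough sketch of the DualCache layering (in-memory in front of Redis).
# Class names follow the module names above; signatures are assumptions.
from litellm.caching.dual_cache import DualCache
from litellm.caching.in_memory_cache import InMemoryCache
from litellm.caching.redis_cache import RedisCache

dual_cache = DualCache(
    in_memory_cache=InMemoryCache(),
    redis_cache=RedisCache(host="localhost", port=6379),
)

# Writes go to both layers; reads check the in-memory layer before hitting Redis.
dual_cache.set_cache("user:123:request_count", 42)
value = dual_cache.get_cache("user:123:request_count")
```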

Documentation