litellm-mirror/litellm/caching
Krish Dholakia 54b7f17ca6
fix(proxy_server.py): fix setting router redis cache, if cache enabled on litellm_settings (#8859)
* fix(proxy_server.py): fix setting router redis cache, if cache enabled on litellm_settings

enables configurations like namespace to just work

* fix(redis_cache.py): fix key for async increment, to use the set namespace

prevents collisions if redis instance shared across environments
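
The idea behind this fix, as a hypothetical sketch (helper name and key format are illustrative, not the actual redis_cache.py code): every increment key is prefixed with the configured namespace, so two deployments pointed at the same Redis instance write to distinct keys.

# Hypothetical illustration of namespaced keys -- not the actual redis_cache.py implementation.
from typing import Optional

def namespaced_key(namespace: Optional[str], key: str) -> str:
    # Prefix the key with the configured namespace so environments sharing
    # one Redis instance do not increment each other's counters.
    return f"{namespace}:{key}" if namespace else key

assert namespaced_key("staging", "request_count") == "staging:request_count"
assert namespaced_key(None, "request_count") == "request_count"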

* fix load tests on litellm release notes

* fix caching on main branch (#8858)

* fix(streaming_handler.py): fix is delta empty check to handle empty str

* fix(streaming_handler.py): fix delta chunk on final response

* [Bug]: Deepseek error on proxy after upgrading to 1.61.13-stable (#8860)

* fix deepseek error

* test_deepseek_provider_async_completion

* fix get_complete_url

* bump: version 1.61.17 → 1.61.18

* bump: version 1.61.18 → 1.61.19

* vertex ai anthropic thinking param support (#8853)

* fix(vertex_llm_base.py): handle credentials passed in as dictionary

* fix(router.py): support vertex credentials as json dict

* test(test_vertex.py): allows easier testing

mock anthropic thinking response for vertex ai

* test(vertex_ai_partner_models/): don't remove "@" from model

removing it breaks anthropic cost calculation

* test: move testing

* fix: fix linting error

* fix: fix linting error

* fix(vertex_ai_partner_models/main.py): split @ for codestral model

* test: fix test

* fix: fix stripping "@" on mistral models

* fix: fix test

* test: fix test
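
PR #8853 above adds support for Anthropic's "thinking" parameter on Vertex AI partner models. A minimal sketch of what such a call might look like through the LiteLLM SDK follows; the model id, token budget, and prior Vertex credential setup are illustrative assumptions, not taken from this changelog.

# Sketch only: assumes Vertex AI project/location/credentials are already configured.
from litellm import completion

response = completion(
    model="vertex_ai/claude-3-7-sonnet@20250219",  # illustrative model id; note the "@" is preserved
    messages=[{"role": "user", "content": "Walk me through this proof step by step."}],
    thinking={"type": "enabled", "budget_tokens": 1024},  # Anthropic extended-thinking parameter
)
print(response.choices[0].message.content)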

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2025-03-02 08:39:06 -08:00
__init__.py (Redis Cluster) - Fixes for using redis cluster + pipeline (#8442) 2025-02-12 18:01:32 -08:00
_internal_lru_cache.py (litellm SDK perf improvements) - handle cases when unable to lookup model in model cost map (#7750) 2025-01-13 19:58:46 -08:00
base_cache.py LiteLLM Minor Fixes & Improvements (11/12/2024) (#6705) 2024-11-12 22:50:51 +05:30
caching.py (Bug fix) - don't log messages in model_parameters in StandardLoggingPayload (#8932) 2025-03-01 13:39:45 -08:00
caching_handler.py fix 1 - latency fix (#7655) 2025-01-09 15:57:05 -08:00
disk_cache.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
dual_cache.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
in_memory_cache.py Provider Budget Routing - Get Budget, Spend Details (#7063) 2024-12-06 21:14:12 -08:00
qdrant_semantic_cache.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
Readme.md (refactor) - caching use separate files for each cache class (#6251) 2024-10-16 13:17:21 +05:30
redis_cache.py fix(proxy_server.py): fix setting router redis cache, if cache enabled on litellm_settings (#8859) 2025-03-02 08:39:06 -08:00
redis_cluster_cache.py (Redis fix) - use mget_non_atomic (#8682) 2025-02-20 17:51:31 -08:00
redis_semantic_cache.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
s3_cache.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00

Caching on LiteLLM

LiteLLM supports multiple caching mechanisms. This allows users to choose the most suitable caching solution for their use case.

The following caching mechanisms are supported:

  1. RedisCache
  2. RedisSemanticCache
  3. QdrantSemanticCache
  4. InMemoryCache
  5. DiskCache
  6. S3Cache
  7. DualCache (updates both Redis and an in-memory cache simultaneously)
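
Each of these backends sits behind the Cache wrapper in caching.py. A rough sketch of enabling one of them from the SDK, based on the public caching docs (connection values are placeholders, and the exact constructor arguments may vary by backend):

# Rough sketch; connection details are placeholders.
import litellm
from litellm.caching.caching import Cache

# Point LiteLLM at a Redis cache (type can also be e.g. "local", "disk", or "s3").
litellm.cache = Cache(type="redis", host="localhost", port=6379, password="my-password")

# Identical completion calls can now be served from the cache.
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
    caching=True,
)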

Folder Structure

litellm/caching/
├── __init__.py
├── _internal_lru_cache.py
├── base_cache.py
├── caching.py
├── caching_handler.py
├── disk_cache.py
├── dual_cache.py
├── in_memory_cache.py
├── qdrant_semantic_cache.py
├── redis_cache.py
├── redis_cluster_cache.py
├── redis_semantic_cache.py
└── s3_cache.py
Documentation