# Caching on LiteLLM
LiteLLM supports multiple caching mechanisms, allowing users to choose the most suitable caching solution for their use case.
The following caching mechanisms are supported (a minimal usage sketch follows the list):
- RedisCache
- RedisSemanticCache
- QdrantSemanticCache
- InMemoryCache
- DiskCache
- S3Cache
- DualCache (updates both Redis and an in-memory cache simultaneously; see the DualCache sketch at the end of this README)
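
As a quick illustration, here is a minimal sketch of enabling the Redis cache; the `host` and `port` values are placeholders for your own Redis deployment:

```python
# A minimal sketch: route completion calls through a Redis-backed cache.
# "localhost" and "6379" are placeholder connection details.
import litellm
from litellm.caching.caching import Cache

litellm.cache = Cache(
    type="redis",      # other supported types include "redis-semantic",
    host="localhost",  # "qdrant-semantic", "local", "disk", and "s3"
    port="6379",
)

# With litellm.cache set, identical requests can be served from the cache.
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, world"}],
    caching=True,  # per-request opt-in flag
)
```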
## Folder Structure

```
litellm/caching/
├── base_cache.py
├── caching.py
├── caching_handler.py
├── disk_cache.py
├── dual_cache.py
├── in_memory_cache.py
├── qdrant_semantic_cache.py
├── redis_cache.py
├── redis_semantic_cache.py
└── s3_cache.py
```
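
The `DualCache` listed above layers the in-memory cache in front of Redis. Below is a hedged sketch of that pattern, assuming the constructor and method names found in `dual_cache.py`, `in_memory_cache.py`, and `redis_cache.py`; exact signatures may differ by version:

```python
# A hedged sketch of the DualCache pattern: writes go to both layers,
# reads check the in-memory cache first and fall back to Redis.
from litellm.caching.dual_cache import DualCache
from litellm.caching.in_memory_cache import InMemoryCache
from litellm.caching.redis_cache import RedisCache

cache = DualCache(
    in_memory_cache=InMemoryCache(),
    redis_cache=RedisCache(host="localhost", port=6379),  # placeholder connection
)

cache.set_cache("my-key", "my-value")  # written to both caches
value = cache.get_cache("my-key")      # served from the in-memory layer first
```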