# Caching on LiteLLM
LiteLLM supports multiple caching mechanisms, so users can choose the caching solution that best fits their use case.
The following caching mechanisms are supported (a minimal usage sketch follows the list):
- RedisCache
- RedisSemanticCache
- QdrantSemanticCache
- InMemoryCache
- DiskCache
- S3Cache
- DualCache (updates both Redis and an in-memory cache simultaneously)
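A minimal sketch of enabling caching with the top-level `Cache` class from `caching.py`. It assumes the default in-memory backend (`type="local"`); a Redis backend would additionally need host/port/password for a running Redis instance. The model name and prompt are placeholders.

```python
import litellm
from litellm import completion
from litellm.caching.caching import Cache

# Use the in-memory cache; pass type="redis" (plus connection details)
# to back the cache with Redis instead.
litellm.cache = Cache(type="local")

# First call hits the provider and stores the response in the cache.
response1 = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, world"}],
    caching=True,  # opt this call into the cache
)

# Identical request; served from the cache instead of the provider.
response2 = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, world"}],
    caching=True,
)
```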
## Folder Structure
```
litellm/caching/
├── base_cache.py
├── caching.py
├── caching_handler.py
├── disk_cache.py
├── dual_cache.py
├── in_memory_cache.py
├── qdrant_semantic_cache.py
├── redis_cache.py
├── redis_semantic_cache.py
├── s3_cache.py
```
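For completeness, a hedged sketch of using `DualCache` from `dual_cache.py` directly: writes go to both the in-memory and Redis layers, and reads check the in-memory layer before falling back to Redis. The Redis host and port below are placeholders for a running instance, and the key/value are illustrative only.

```python
from litellm.caching.caching import DualCache, InMemoryCache, RedisCache

dual_cache = DualCache(
    in_memory_cache=InMemoryCache(),
    redis_cache=RedisCache(host="localhost", port=6379),  # placeholder connection details
)

# Written to both the in-memory cache and Redis.
dual_cache.set_cache(key="my-key", value="my-value")

# Served from the in-memory cache first, falling back to Redis on a miss.
print(dual_cache.get_cache(key="my-key"))
```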