Caching on LiteLLM
LiteLLM supports multiple caching mechanisms, allowing users to choose the most suitable caching solution for their use case.
The following caching mechanisms are supported (a usage sketch follows the list):
- RedisCache
- RedisSemanticCache
- QdrantSemanticCache
- InMemoryCache
- DiskCache
- S3Cache
- DualCache (updates both Redis and an in-memory cache simultaneously)
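A cache backend is enabled by assigning a Cache instance to litellm.cache. The snippet below is a minimal sketch following that pattern, assuming a local Redis instance; the model name, connection details, and prompt are placeholders:

```python
# Minimal sketch: route completion calls through a Redis cache.
# host/port are placeholders; type="local" would select the in-memory cache instead.
import litellm
from litellm import completion
from litellm.caching.caching import Cache

litellm.cache = Cache(type="redis", host="localhost", port="6379")

messages = [{"role": "user", "content": "What is 2 + 2?"}]
response1 = completion(model="gpt-3.5-turbo", messages=messages, caching=True)
# An identical request is now served from the cache instead of the provider.
response2 = completion(model="gpt-3.5-turbo", messages=messages, caching=True)
```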
Folder Structure
litellm/caching/
├── base_cache.py
├── caching.py
├── caching_handler.py
├── disk_cache.py
├── dual_cache.py
├── in_memory_cache.py
├── qdrant_semantic_cache.py
├── redis_cache.py
├── redis_semantic_cache.py
└── s3_cache.py
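Given this layout, DualCache composes the in-memory and Redis backends from the files above. The snippet below is a minimal sketch, assuming the in_memory_cache/redis_cache constructor arguments and the set_cache/get_cache methods shown here; the cache key, value, and connection details are placeholders:

```python
# Minimal sketch of a two-layer cache built from the modules in this folder.
from litellm.caching.dual_cache import DualCache
from litellm.caching.in_memory_cache import InMemoryCache
from litellm.caching.redis_cache import RedisCache

dual_cache = DualCache(
    in_memory_cache=InMemoryCache(),
    redis_cache=RedisCache(host="localhost", port=6379),  # placeholder connection
)

# Writes update both layers; reads hit the in-memory layer first and
# fall back to Redis on a miss.
dual_cache.set_cache("user:42:rate_limit", 100)
value = dual_cache.get_cache("user:42:rate_limit")
```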