# Caching on LiteLLM
LiteLLM supports multiple caching mechanisms, so users can choose the backend best suited to their use case. The following caches are supported:
- RedisCache
- RedisSemanticCache
- QdrantSemanticCache
- InMemoryCache
- DiskCache
- S3Cache
- DualCache (updates both Redis and an in-memory cache simultaneously; a usage sketch follows the folder structure below)
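
To enable caching, assign a `Cache` instance to `litellm.cache`; completion calls made with `caching=True` then read and write through it. A minimal sketch, assuming a Redis instance on `localhost:6379` (the host/port values are placeholders for your own deployment):

```python
import litellm
from litellm import completion
from litellm.caching.caching import Cache

# Point LiteLLM at a Redis cache (assumes Redis running on localhost:6379).
litellm.cache = Cache(type="redis", host="localhost", port="6379")

messages = [{"role": "user", "content": "What is the capital of France?"}]

# First call hits the LLM and writes the response to the cache.
response1 = completion(model="gpt-3.5-turbo", messages=messages, caching=True)

# An identical second call is served from the cache.
response2 = completion(model="gpt-3.5-turbo", messages=messages, caching=True)
```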
## Folder Structure

```
litellm/caching/
├── base_cache.py
├── caching.py
├── caching_handler.py
├── disk_cache.py
├── dual_cache.py
├── in_memory_cache.py
├── qdrant_semantic_cache.py
├── redis_cache.py
├── redis_semantic_cache.py
└── s3_cache.py
```
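
DualCache layers an in-memory cache in front of Redis: reads are served locally when possible, while writes propagate to both layers. A minimal sketch of direct usage, assuming the import paths and the `in_memory_cache`/`redis_cache` constructor arguments below match the current implementation:

```python
from litellm.caching.caching import DualCache, InMemoryCache, RedisCache

# Layer an in-memory cache over Redis (assumes Redis on localhost:6379).
dual_cache = DualCache(
    in_memory_cache=InMemoryCache(),
    redis_cache=RedisCache(host="localhost", port=6379),
)

# Writes go to both the in-memory cache and Redis.
dual_cache.set_cache(key="user:123", value={"spend": 0.42})

# Reads check the in-memory cache first, falling back to Redis on a miss.
value = dual_cache.get_cache(key="user:123")
```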