Commit graph

79 commits

Author SHA1 Message Date
Krrish Dholakia
e46a27a4a7 feat(proxy_server.py): support batch writing failed spend logs
87.38% improvement in spend logging reliability
2024-02-07 19:31:14 -08:00
ishaan-jaff
3d0ece828a (feat) show semantic-cache on health/readiness 2024-02-06 13:35:34 -08:00
ishaan-jaff
05f379234d allow setting redis_semantic cache_embedding model 2024-02-06 10:22:02 -08:00
ishaan-jaff
751fb1af89 (feat) log semantic_sim to langfuse 2024-02-06 09:31:57 -08:00
ishaan-jaff
6249a97098 (feat) working semantic-cache on litellm proxy 2024-02-06 08:52:57 -08:00
ishaan-jaff
76def20ffe (feat) RedisSemanticCache - async 2024-02-06 08:13:12 -08:00
ishaan-jaff
ccc94128d3 (fix) semantic cache 2024-02-05 18:25:22 -08:00
ishaan-jaff
1b39454a08 (feat) working - sync semantic caching 2024-02-05 17:58:12 -08:00
ishaan-jaff
d4a799a3ca (feat) add semantic cache 2024-02-05 12:28:21 -08:00
Krish Dholakia
9ab59045a3 Merge branch 'main' into litellm_embedding_caching_updates 2024-02-03 18:08:47 -08:00
ishaan-jaff
4d6ffe4400 (feat) - cache - add delete cache 2024-02-02 18:36:51 -08:00
Krrish Dholakia
3e5b743b89 fix(caching.py): add logging module support for caching 2024-01-20 17:34:29 -08:00
Duarte OC
daa399bc60 adds s3 folder prefix to cache 2024-01-18 21:57:47 +01:00
Krrish Dholakia
3c02ad8b95 fix(utils.py): exclude s3 caching from individual item caching for embedding list
can't bulk upload to s3, so this will slow down calls

https://github.com/BerriAI/litellm/pull/1417
2024-01-13 16:19:30 +05:30
Krrish Dholakia
40c952f7c2 fix(caching.py): fix async in-memory caching 2024-01-13 15:33:57 +05:30
Krrish Dholakia
7f83cca62c fix(caching.py): return updated kwargs from get_cache helper function 2024-01-13 15:04:34 +05:30
Krrish Dholakia
c43a141889 fix(caching.py): remove print verbose statement 2024-01-13 14:11:05 +05:30
Krrish Dholakia
01df37d8cf fix(caching.py): use bulk writes and blocking connection pooling for reads from Redis 2024-01-13 11:50:50 +05:30
Krrish Dholakia
007870390d fix: support async redis caching 2024-01-12 21:46:41 +05:30
Krrish Dholakia
df9df7b040 fix: n 2024-01-11 16:30:05 +05:30
David Manouchehri
8a07476524 (caching) Fix incorrect usage of str, which created invalid JSON. 2024-01-09 14:21:41 -05:00
Ishaan Jaff
5cfcd42763 Merge pull request #1311 from Manouchehri/patch-5
(caching) improve s3 backend
2024-01-08 09:47:57 +05:30
David Manouchehri
56b03732ae (caching) Set Content-Disposition header and Content-Language 2024-01-07 12:21:15 -05:00
Krrish Dholakia
b0827a87b2 fix(caching.py): support s-maxage param for cache controls 2024-01-04 11:41:23 +05:30
David Manouchehri
c54e0813b4 (caching) improve s3 backend by specifying cache-control and content-type 2024-01-03 13:44:28 -05:00
Krrish Dholakia
f2da345173 fix(caching.py): handle cached_response being a dict not json string 2024-01-03 17:29:27 +05:30
ishaan-jaff
58ce5d44ae (feat) s3 cache support all boto3 params 2024-01-03 15:42:23 +05:30
ishaan-jaff
00364da993 (feat) add s3 Bucket as Cache 2024-01-03 15:13:43 +05:30
Krrish Dholakia
8cee267a5b fix(caching.py): support ttl, s-maxage, and no-cache cache controls
https://github.com/BerriAI/litellm/issues/1306
2024-01-03 12:42:43 +05:30
ishaan-jaff
cc7b964433 (docs) add litellm.cache docstring 2023-12-30 20:04:08 +05:30
ishaan-jaff
70cdc16d6f (feat) cache context manager - update cache 2023-12-30 19:50:53 +05:30
ishaan-jaff
ddddfe6602 (feat) add cache context manager 2023-12-30 19:32:51 +05:30
Krrish Dholakia
69935db239 fix(router.py): periodically re-initialize azure/openai clients to solve max conn issue 2023-12-30 15:48:34 +05:30
Krrish Dholakia
1e07f0fce8 fix(caching.py): hash the cache key to prevent key too long errors 2023-12-29 15:03:33 +05:30
Krrish Dholakia
4905929de3 refactor: add black formatting 2023-12-25 14:11:20 +05:30
Krrish Dholakia
84ad9f441e feat(router.py): support caching groups 2023-12-15 21:45:51 -08:00
ishaan-jaff
4b3ef49d60 (fix) linting 2023-12-14 22:35:29 +05:30
ishaan-jaff
9ee16bc962 (feat) caching - add supported call types 2023-12-14 22:27:14 +05:30
ishaan-jaff
008df34ddc (feat) use async_cache for acompletion/aembedding 2023-12-14 16:04:45 +05:30
ishaan-jaff
a8e12661c2 (fix) caching - get_cache_key - don't use set 2023-12-14 14:09:24 -08:00
Krrish Dholakia
8d688b6217 fix(utils.py): support caching for embedding + log cache hits
2023-12-13 18:37:30 -08:00
ishaan-jaff
ee3c9d19a2 (feat) caching + stream - bedrock 2023-12-11 08:43:50 -08:00
ishaan-jaff
2879b36636 (fix) setting cache keys 2023-12-09 16:42:59 -08:00
ishaan-jaff
60bf552fe8 (fix) caching + proxy - use model group 2023-12-09 15:40:22 -08:00
ishaan-jaff
67c730e264 (feat) async + stream cache 2023-12-09 14:22:10 -08:00
Krrish Dholakia
4bf875d3ed fix(router.py): fix least-busy routing 2023-12-08 20:29:49 -08:00
ishaan-jaff
04ec363788 (test) redis cache 2023-12-08 19:14:46 -08:00
ishaan-jaff
6e8ad10991 (feat) caching - streaming caching support 2023-12-08 11:50:37 -08:00
ishaan-jaff
9b0afbe2cb (fix) bug - caching: gen cache key in order 2023-12-08 11:50:37 -08:00
ishaan-jaff
762f28e4d7 (fix) make print_verbose non blocking 2023-12-07 17:31:32 -08:00