Commit graph

172 commits

Author SHA1 Message Date
ishaan-jaff
2955f8ed39 (feat) working - sync semantic caching 2024-02-05 17:58:12 -08:00
ishaan-jaff
1689d5790f (feat) add semantic cache 2024-02-05 12:28:21 -08:00
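The two semantic-caching commits above add cache lookup by embedding similarity rather than exact key match. A minimal, illustrative sketch of the idea — in-memory storage, cosine similarity, and the threshold value are all assumptions, not LiteLLM's actual implementation:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class SemanticCache:
    """Toy semantic cache: return a stored response when a new prompt's
    embedding is close enough to a previously cached one."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def set(self, embedding, response):
        self.entries.append((embedding, response))

    def get(self, embedding):
        for cached_emb, response in self.entries:
            if cosine(embedding, cached_emb) >= self.threshold:
                return response
        return None  # cache miss
```

In a real deployment the embeddings would come from an embedding model and live in a vector store rather than a Python list; this only shows the lookup logic.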
Krish Dholakia
45cbb3cf3d Merge branch 'main' into litellm_embedding_caching_updates 2024-02-03 18:08:47 -08:00
ishaan-jaff
755a43f27e (feat) - cache - add delete cache 2024-02-02 18:36:51 -08:00
Krrish Dholakia
a5afe5bf41 fix(caching.py): add logging module support for caching 2024-01-20 17:34:29 -08:00
Duarte OC
440ee81504 adds s3 folder prefix to cache 2024-01-18 21:57:47 +01:00
Krrish Dholakia
f08bb7e41f fix(utils.py): exclude s3 caching from individual item caching for embedding list
can't bulk upload to s3, so this will slow down calls

https://github.com/BerriAI/litellm/pull/1417
2024-01-13 16:19:30 +05:30
Krrish Dholakia
79cc739b53 fix(caching.py): fix async in-memory caching 2024-01-13 15:33:57 +05:30
Krrish Dholakia
cdadac1649 fix(caching.py): return updated kwargs from get_cache helper function 2024-01-13 15:04:34 +05:30
Krrish Dholakia
0182dee42b fix(caching.py): remove print verbose statement 2024-01-13 14:11:05 +05:30
Krrish Dholakia
880f829013 fix(caching.py): use bulk writes and blocking connection pooling for reads from Redis 2024-01-13 11:50:50 +05:30
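The bulk-write commit above batches Redis writes instead of issuing one round trip per key. A hedged sketch of the pattern, assuming a redis-py-style client whose `pipeline()` queues commands and flushes them in one `execute()` call (not LiteLLM's actual code):

```python
def bulk_set(redis_client, mapping, ttl):
    """Queue a SET (with expiry) for every key/value pair, then flush the
    whole batch to Redis in a single round trip via a pipeline."""
    pipe = redis_client.pipeline()
    for key, value in mapping.items():
        pipe.set(key, value, ex=ttl)  # ex= sets the TTL in seconds
    return pipe.execute()
```

Pipelining matters most for embedding caching, where a single call can produce many cache entries at once.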
Krrish Dholakia
813fb19620 fix: support async redis caching 2024-01-12 21:46:41 +05:30
Krrish Dholakia
1472dc3f54 fix: n 2024-01-11 16:30:05 +05:30
David Manouchehri
7bbfad5841 (caching) Fix incorrect usage of str, which created invalid JSON. 2024-01-09 14:21:41 -05:00
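The fix above replaces `str()` serialization with proper JSON encoding. Calling `str()` on a Python dict emits single-quoted, Python-repr syntax, which is not valid JSON — a short self-contained demonstration of the bug class:

```python
import json

cached = {"role": "assistant", "content": "hi"}

bad = str(cached)          # "{'role': 'assistant', 'content': 'hi'}" — single quotes
good = json.dumps(cached)  # '{"role": "assistant", "content": "hi"}'

def is_valid_json(s: str) -> bool:
    # True if s parses as JSON, False otherwise.
    try:
        json.loads(s)
        return True
    except json.JSONDecodeError:
        return False
```

`json.dumps` round-trips cleanly through `json.loads`; the `str()` form does not.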
Ishaan Jaff
e446fd9efb Merge pull request #1311 from Manouchehri/patch-5
(caching) improve s3 backend
2024-01-08 09:47:57 +05:30
David Manouchehri
4d30e672d7 (caching) Set Content-Disposition header and Content-Language 2024-01-07 12:21:15 -05:00
Krrish Dholakia
a5bac8ae15 fix(caching.py): support s-maxage param for cache controls 2024-01-04 11:41:23 +05:30
David Manouchehri
093174c7f5 (caching) improve s3 backend by specifying cache-control and content-type 2024-01-03 13:44:28 -05:00
Krrish Dholakia
4094356d6f fix(caching.py): handle cached_response being a dict not json string 2024-01-03 17:29:27 +05:30
ishaan-jaff
077786eed8 (feat) s3 cache support all boto3 params 2024-01-03 15:42:23 +05:30
ishaan-jaff
b78be33741 (feat) add s3 Bucket as Cache 2024-01-03 15:13:43 +05:30
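The two commits above introduce an S3 bucket as a cache backend. An illustrative sketch of the shape such a backend might take — the class name, key-prefix handling, and `CacheControl` usage are assumptions, not LiteLLM's actual implementation; `s3_client` is expected to expose a boto3-style `put_object`/`get_object` interface:

```python
import json

class S3Cache:
    """Minimal sketch of an S3-backed response cache."""

    def __init__(self, s3_client, bucket, key_prefix=""):
        self.client = s3_client
        self.bucket = bucket
        self.key_prefix = key_prefix  # optional folder prefix inside the bucket

    def set_cache(self, key, value, ttl=None):
        params = {
            "Bucket": self.bucket,
            "Key": self.key_prefix + key,
            "Body": json.dumps(value),
            "ContentType": "application/json",
        }
        if ttl is not None:
            # Cache-Control hint for anything reading the object over HTTP.
            params["CacheControl"] = f"max-age={ttl}"
        self.client.put_object(**params)

    def get_cache(self, key):
        try:
            obj = self.client.get_object(Bucket=self.bucket, Key=self.key_prefix + key)
            return json.loads(obj["Body"].read())
        except Exception:
            return None  # treat any missing key or error as a cache miss
```

Passing the client in (rather than constructing it) is what lets the later "support all boto3 params" commit forward arbitrary configuration to `boto3.client("s3", ...)`.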
Krrish Dholakia
176af67aac fix(caching.py): support ttl, s-max-age, and no-cache cache controls
https://github.com/BerriAI/litellm/issues/1306
2024-01-03 12:42:43 +05:30
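The commit above adds per-request cache controls named `ttl`, `s-maxage`, and `no-cache`. A hedged sketch of how such controls might be read from a request's cache parameters — the names come from the commit message, but the precedence and return shape here are assumptions:

```python
def parse_cache_controls(cache_params: dict):
    """Return (skip_read, ttl) from per-request cache controls.

    'no-cache' skips the cache read entirely; 'ttl' (with 's-maxage' as a
    fallback) bounds how fresh a cached response must be.
    """
    skip_read = bool(cache_params.get("no-cache", False))
    ttl = cache_params.get("ttl", cache_params.get("s-maxage"))
    return skip_read, ttl
```

The names deliberately mirror HTTP `Cache-Control` directives, which is also why the s3 backend commits set a `Cache-Control` header on stored objects.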
ishaan-jaff
07706f42fe (docs) add litellm.cache docstring 2023-12-30 20:04:08 +05:30
ishaan-jaff
6f1f40ef58 (feat) cache context manager - update cache 2023-12-30 19:50:53 +05:30
ishaan-jaff
e8bebb2e14 (feat) add cache context manager 2023-12-30 19:32:51 +05:30
Krrish Dholakia
2cea8b0e83 fix(router.py): periodically re-initialize azure/openai clients to solve max conn issue 2023-12-30 15:48:34 +05:30
Krrish Dholakia
3c50177314 fix(caching.py): hash the cache key to prevent key too long errors 2023-12-29 15:03:33 +05:30
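The key-hashing commit above bounds cache-key length: raw keys built from full prompts or embedding inputs can exceed backend key-size limits, while a digest is always fixed-width. A minimal sketch (SHA-256 is an assumption; any stable digest works):

```python
import hashlib

def stable_cache_key(raw_key: str) -> str:
    """Hash an arbitrarily long raw key down to a fixed 64-character hex
    digest, so long prompts never exceed backend key-length limits."""
    return hashlib.sha256(raw_key.encode("utf-8")).hexdigest()
```

The digest is deterministic, so the same request parameters always map to the same cache entry.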
Krrish Dholakia
79978c44ba refactor: add black formatting 2023-12-25 14:11:20 +05:30
Krrish Dholakia
e76ed6be7d feat(router.py): support caching groups 2023-12-15 21:45:51 -08:00
ishaan-jaff
a848998f80 (fix) linting 2023-12-14 22:35:29 +05:30
ishaan-jaff
072fdac48c (feat) caching - add supported call types 2023-12-14 22:27:14 +05:30
ishaan-jaff
25efd43551 (feat) use async_cache for acompletion/aembedding 2023-12-14 16:04:45 +05:30
ishaan-jaff
8cf5767cca (fix) caching - get_cache_key - dont use set 2023-12-14 14:09:24 +05:30
Krrish Dholakia
853508e8c0 fix(utils.py): support caching for embedding + log cache hits 2023-12-13 18:37:30 -08:00
ishaan-jaff
33d6b5206d (feat) caching + stream - bedrock 2023-12-11 08:43:50 -08:00
ishaan-jaff
af32ba418e (fix) setting cache keys 2023-12-09 16:42:59 -08:00
ishaan-jaff
1eedd88760 (fix) caching + proxy - use model group 2023-12-09 15:40:22 -08:00
ishaan-jaff
914b7298c5 (feat) async + stream cache 2023-12-09 14:22:10 -08:00
Krrish Dholakia
a65c8919fc fix(router.py): fix least-busy routing 2023-12-08 20:29:49 -08:00
ishaan-jaff
bac49f0e11 (test) redis cache 2023-12-08 19:14:46 -08:00
ishaan-jaff
e430255794 (feat) caching - streaming caching support 2023-12-08 11:50:37 -08:00
ishaan-jaff
991d37f10e (fix) bug - caching: gen cache key in order 2023-12-08 11:50:37 -08:00
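The "gen cache key in order" fix above addresses a classic bug: building the key by iterating kwargs in insertion order means two identical requests with differently ordered parameters miss each other's cache entries. A hedged sketch of the fix (the key format is an assumption):

```python
def get_cache_key(**kwargs) -> str:
    """Serialize request parameters in sorted key order, so equal parameter
    sets always produce the same cache key regardless of argument order."""
    return "|".join(f"{k}={kwargs[k]}" for k in sorted(kwargs))
```

Sorting before serializing is what makes `{"model": ..., "temperature": ...}` and `{"temperature": ..., "model": ...}` collide on the same entry, as intended.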
ishaan-jaff
f744445db4 (fix) make print_verbose non blocking 2023-12-07 17:31:32 -08:00
Krrish Dholakia
94abb14b99 fix(_redis.py): support additional params for redis 2023-12-05 12:16:51 -08:00
ishaan-jaff
b95d3acb12 (feat) init redis cache with **kwargs 2023-12-04 20:50:08 -08:00
Krrish Dholakia
6f40fd8ee2 fix(proxy_server.py): fix linting issues 2023-11-24 11:39:01 -08:00
ishaan-jaff
48f1ad05a3 (fix) caching use model, messages, temp, max_tokens as cache_key 2023-11-23 20:56:41 -08:00
Krrish Dholakia
3a8d7ec835 fix(router.py): add modelgroup to call metadata 2023-11-23 20:55:49 -08:00
ishaan-jaff
95b0b904cf (feat) caching: Use seed, max_tokens etc in cache key 2023-11-23 18:17:12 -08:00
ishaan-jaff
6934c08785 (chore) remove bloat: deprecated api.litellm cache 2023-11-23 17:20:22 -08:00