Commit graph

60 commits

Author | SHA1 | Message | Date
Duarte OC | daa399bc60 | adds s3 folder prefix to cache | 2024-01-18 21:57:47 +01:00
David Manouchehri | 8a07476524 | (caching) Fix incorrect usage of str, which created invalid JSON. | 2024-01-09 14:21:41 -05:00
Ishaan Jaff | 5cfcd42763 | Merge pull request #1311 from Manouchehri/patch-5: (caching) improve s3 backend | 2024-01-08 09:47:57 +05:30
David Manouchehri | 56b03732ae | (caching) Set Content-Disposition header and Content-Language | 2024-01-07 12:21:15 -05:00
Krrish Dholakia | b0827a87b2 | fix(caching.py): support s-maxage param for cache controls | 2024-01-04 11:41:23 +05:30
David Manouchehri | c54e0813b4 | (caching) improve s3 backend by specifying cache-control and content-type | 2024-01-03 13:44:28 -05:00
Krrish Dholakia | f2da345173 | fix(caching.py): handle cached_response being a dict not json string | 2024-01-03 17:29:27 +05:30
ishaan-jaff | 58ce5d44ae | (feat) s3 cache support all boto3 params | 2024-01-03 15:42:23 +05:30
ishaan-jaff | 00364da993 | (feat) add s3 Bucket as Cache | 2024-01-03 15:13:43 +05:30
Krrish Dholakia | 8cee267a5b | fix(caching.py): support ttl, s-max-age, and no-cache cache controls (https://github.com/BerriAI/litellm/issues/1306) | 2024-01-03 12:42:43 +05:30
ishaan-jaff | cc7b964433 | (docs) add litellm.cache docstring | 2023-12-30 20:04:08 +05:30
ishaan-jaff | 70cdc16d6f | (feat) cache context manager - update cache | 2023-12-30 19:50:53 +05:30
ishaan-jaff | ddddfe6602 | (feat) add cache context manager | 2023-12-30 19:32:51 +05:30
Krrish Dholakia | 69935db239 | fix(router.py): periodically re-initialize azure/openai clients to solve max conn issue | 2023-12-30 15:48:34 +05:30
Krrish Dholakia | 1e07f0fce8 | fix(caching.py): hash the cache key to prevent key too long errors | 2023-12-29 15:03:33 +05:30
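The key-hashing fix named in 1e07f0fce8 can be sketched as follows (function name assumed; a hash like SHA-256 bounds any serialized request to a fixed-length digest):

```python
import hashlib

def hashed_cache_key(raw_key: str) -> str:
    # Sketch: hash the (possibly very long) serialized request into a
    # fixed 64-character digest, avoiding "key too long" backend errors.
    return hashlib.sha256(raw_key.encode("utf-8")).hexdigest()
```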
Krrish Dholakia | 4905929de3 | refactor: add black formatting | 2023-12-25 14:11:20 +05:30
Krrish Dholakia | 84ad9f441e | feat(router.py): support caching groups | 2023-12-15 21:45:51 -08:00
ishaan-jaff | 4b3ef49d60 | (fix) linting | 2023-12-14 22:35:29 +05:30
ishaan-jaff | 9ee16bc962 | (feat) caching - add supported call types | 2023-12-14 22:27:14 +05:30
ishaan-jaff | 008df34ddc | (feat) use async_cache for acompletion/aembedding | 2023-12-14 16:04:45 +05:30
ishaan-jaff | a8e12661c2 | (fix) caching - get_cache_key - dont use set | 2023-12-14 14:09:24 +05:30
Krrish Dholakia | 8d688b6217 | fix(utils.py): support caching for embedding + log cache hits | 2023-12-13 18:37:30 -08:00
ishaan-jaff | ee3c9d19a2 | (feat) caching + stream - bedrock | 2023-12-11 08:43:50 -08:00
ishaan-jaff | 2879b36636 | (fix) setting cache keys | 2023-12-09 16:42:59 -08:00
ishaan-jaff | 60bf552fe8 | (fix) caching + proxy - use model group | 2023-12-09 15:40:22 -08:00
ishaan-jaff | 67c730e264 | (feat) async + stream cache | 2023-12-09 14:22:10 -08:00
Krrish Dholakia | 4bf875d3ed | fix(router.py): fix least-busy routing | 2023-12-08 20:29:49 -08:00
ishaan-jaff | 04ec363788 | (test) redis cache | 2023-12-08 19:14:46 -08:00
ishaan-jaff | 6e8ad10991 | (feat) caching - streaming caching support | 2023-12-08 11:50:37 -08:00
ishaan-jaff | 9b0afbe2cb | (fix) bug - caching: gen cache key in order | 2023-12-08 11:50:37 -08:00
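Generating the cache key "in order", as this fix describes, amounts to serializing call arguments deterministically; a minimal sketch (function name and separator are assumptions):

```python
def make_cache_key(**kwargs) -> str:
    # Sketch: serialize kwargs in sorted order so logically identical
    # calls produce the same key regardless of argument order.
    return ":".join(f"{k}={kwargs[k]!r}" for k in sorted(kwargs))
```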
ishaan-jaff | 762f28e4d7 | (fix) make print_verbose non blocking | 2023-12-07 17:31:32 -08:00
Krrish Dholakia | 88c95ca259 | fix(_redis.py): support additional params for redis | 2023-12-05 12:16:51 -08:00
ishaan-jaff | 9ba17657ad | (feat) init redis cache with **kwargs | 2023-12-04 20:50:08 -08:00
Krrish Dholakia | 2e8d582a34 | fix(proxy_server.py): fix linting issues | 2023-11-24 11:39:01 -08:00
ishaan-jaff | ca852e1dcd | (fix) caching use model, messages, temp, max_tokens as cache_key | 2023-11-23 20:56:41 -08:00
Krrish Dholakia | 187403c5cc | fix(router.py): add modelgroup to call metadata | 2023-11-23 20:55:49 -08:00
ishaan-jaff | 3660fb1f7f | (feat) caching: Use seed, max_tokens etc in cache key | 2023-11-23 18:17:12 -08:00
ishaan-jaff | 69c6bbd50b | (chore) remove bloat: deprecated api.litellm cache | 2023-11-23 17:20:22 -08:00
Krrish Dholakia | 87aa36a2ec | fix(caching.py): fix linting issues | 2023-11-23 13:21:45 -08:00
Krrish Dholakia | 61fc76a8c4 | fix(router.py): fix caching for tracking cooldowns + usage | 2023-11-23 11:13:32 -08:00
Krrish Dholakia | 5d5ca9f7ef | fix(router.py): add support for cooldowns with redis | 2023-11-22 19:54:22 -08:00
Krrish Dholakia | 1665b872c3 | fix(caching.py): dump model response object as json | 2023-11-13 10:41:04 -08:00
Krrish Dholakia | 6b40546e59 | refactor(all-files): removing all print statements; adding pre-commit + flake8 to prevent future regressions | 2023-11-04 12:50:15 -07:00
Krrish Dholakia | 6ead8d8c18 | fix(caching.py): fixing pr issues | 2023-10-31 18:32:40 -07:00
seva | 5e1e8820b4 | Router & Caching fixes: add optional TTL to Cache parameters; fix tpm and rpm caching in Router | 2023-10-30 13:29:35 +01:00
ishaan-jaff | 5bd1b3968e | (feat) Redis caching print exception statements when it fails | 2023-10-25 11:29:48 -07:00
Krrish Dholakia | e35562d188 | fix(router.py): completing redis support work for router | 2023-10-18 12:13:00 -07:00
ishaan-jaff | 69065e9864 | (feat) add docstring for caching | 2023-10-14 16:08:42 -07:00
ishaan-jaff | 6f6d5fae3a | add hosted api.litellm.ai for caching | 2023-10-02 10:27:18 -07:00
Krrish Dholakia | bdc6ef1df8 | add contributor message to code | 2023-09-25 10:00:10 -07:00