Krrish Dholakia | a5bac8ae15 | fix(caching.py): support s-maxage param for cache controls | 2024-01-04 11:41:23 +05:30
David Manouchehri | 093174c7f5 | (caching) improve s3 backend by specifying cache-control and content-type | 2024-01-03 13:44:28 -05:00
Krrish Dholakia | 4094356d6f | fix(caching.py): handle cached_response being a dict not json string | 2024-01-03 17:29:27 +05:30
ishaan-jaff | 077786eed8 | (feat) s3 cache support all boto3 params | 2024-01-03 15:42:23 +05:30
ishaan-jaff | b78be33741 | (feat) add s3 Bucket as Cache | 2024-01-03 15:13:43 +05:30
Krrish Dholakia | 176af67aac | fix(caching.py): support ttl, s-max-age, and no-cache cache controls (https://github.com/BerriAI/litellm/issues/1306) | 2024-01-03 12:42:43 +05:30
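Commits a5bac8ae15 and 176af67aac add per-request cache controls (`ttl`, `s-maxage`, `no-cache`). A sketch of the read-side semantics those names imply, using a hypothetical in-memory cache rather than litellm's actual implementation:

```python
import time

class ControlAwareCache:
    """Toy cache honoring per-request ttl / s-maxage / no-cache controls."""

    def __init__(self):
        self._store = {}  # key -> (value, stored_at)

    def set(self, key, value):
        self._store[key] = (value, time.time())

    def get(self, key, cache_controls=None):
        controls = cache_controls or {}
        if controls.get("no-cache"):  # caller forces a fresh result
            return None
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        # ttl / s-maxage: reject entries older than the caller's limit
        max_age = controls.get("s-maxage", controls.get("ttl"))
        if max_age is not None and time.time() - stored_at > max_age:
            return None
        return value
```

The key point is that the controls are evaluated per lookup, so one caller can bypass or age-limit the cache without affecting others.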
ishaan-jaff | 07706f42fe | (docs) add litellm.cache docstring | 2023-12-30 20:04:08 +05:30
ishaan-jaff | 6f1f40ef58 | (feat) cache context manager - update cache | 2023-12-30 19:50:53 +05:30
ishaan-jaff | e8bebb2e14 | (feat) add cache context manager | 2023-12-30 19:32:51 +05:30
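Commits e8bebb2e14 and 6f1f40ef58 introduce a cache context manager. The shape such a feature typically takes — enable a cache for a scoped block, then restore the previous state — can be sketched as follows; the names here are illustrative, not litellm's API:

```python
from contextlib import contextmanager

# Hypothetical module-level slot holding the currently active cache.
_active_cache = None

@contextmanager
def enable_cache(cache):
    """Enable `cache` for the duration of the with-block, then restore."""
    global _active_cache
    previous = _active_cache
    _active_cache = cache
    try:
        yield cache
    finally:
        _active_cache = previous  # restored even if the block raises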
Krrish Dholakia | 2cea8b0e83 | fix(router.py): periodically re-initialize azure/openai clients to solve max conn issue | 2023-12-30 15:48:34 +05:30
Krrish Dholakia | 3c50177314 | fix(caching.py): hash the cache key to prevent key too long errors | 2023-12-29 15:03:33 +05:30
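Commit 3c50177314 hashes the cache key because raw keys built from full prompts can exceed backend key-size limits (redis keys, s3 object names). A minimal sketch of the idea, assuming a hypothetical helper name:

```python
import hashlib

def hashed_cache_key(raw_key: str) -> str:
    """Reduce an arbitrarily long raw key to a fixed 64-char sha256 digest.

    The digest is deterministic, so identical requests still map to the
    same cache entry, but the stored key length no longer depends on the
    prompt size.
    """
    return hashlib.sha256(raw_key.encode("utf-8")).hexdigest()
```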
Krrish Dholakia | 79978c44ba | refactor: add black formatting | 2023-12-25 14:11:20 +05:30
Krrish Dholakia | e76ed6be7d | feat(router.py): support caching groups | 2023-12-15 21:45:51 -08:00
ishaan-jaff | a848998f80 | (fix) linting | 2023-12-14 22:35:29 +05:30
ishaan-jaff | 072fdac48c | (feat) caching - add supported call types | 2023-12-14 22:27:14 +05:30
ishaan-jaff | 25efd43551 | (feat) use async_cache for acompletion/aembedding | 2023-12-14 16:04:45 +05:30
ishaan-jaff | 8cf5767cca | (fix) caching - get_cache_key - dont use set | 2023-12-14 14:09:24 +05:30
Krrish Dholakia | 853508e8c0 | fix(utils.py): support caching for embedding + log cache hits | 2023-12-13 18:37:30 -08:00
ishaan-jaff | 33d6b5206d | (feat) caching + stream - bedrock | 2023-12-11 08:43:50 -08:00
ishaan-jaff | af32ba418e | (fix) setting cache keys | 2023-12-09 16:42:59 -08:00
ishaan-jaff | 1eedd88760 | (fix) caching + proxy - use model group | 2023-12-09 15:40:22 -08:00
ishaan-jaff | 914b7298c5 | (feat) async + stream cache | 2023-12-09 14:22:10 -08:00
Krrish Dholakia | a65c8919fc | fix(router.py): fix least-busy routing | 2023-12-08 20:29:49 -08:00
ishaan-jaff | bac49f0e11 | (test) redis cache | 2023-12-08 19:14:46 -08:00
ishaan-jaff | e430255794 | (feat) caching - streaming caching support | 2023-12-08 11:50:37 -08:00
ishaan-jaff | 991d37f10e | (fix) bug - caching: gen cache key in order | 2023-12-08 11:50:37 -08:00
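Commits 991d37f10e ("gen cache key in order") and 8cf5767cca ("dont use set") both address the same pitfall: building the key by iterating an unordered collection can yield different key strings for identical requests. A sketch of the fix, with an illustrative helper name:

```python
def get_cache_key(**kwargs) -> str:
    """Build a deterministic cache key from request parameters.

    Sorting the parameter names first means the key is independent of the
    order in which the caller passed the arguments (and of set/dict
    iteration order), so identical requests always hit the same entry.
    """
    return ",".join(f"{name}={kwargs[name]}" for name in sorted(kwargs))
```

Without the `sorted()`, two semantically identical calls could produce distinct keys and silently miss the cache.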
ishaan-jaff | f744445db4 | (fix) make print_verbose non blocking | 2023-12-07 17:31:32 -08:00
Krrish Dholakia | 94abb14b99 | fix(_redis.py): support additional params for redis | 2023-12-05 12:16:51 -08:00
ishaan-jaff | b95d3acb12 | (feat) init redis cache with **kwargs | 2023-12-04 20:50:08 -08:00
Krrish Dholakia | 6f40fd8ee2 | fix(proxy_server.py): fix linting issues | 2023-11-24 11:39:01 -08:00
ishaan-jaff | 48f1ad05a3 | (fix) caching use model, messages, temp, max_tokens as cache_key | 2023-11-23 20:56:41 -08:00
Krrish Dholakia | 3a8d7ec835 | fix(router.py): add modelgroup to call metadata | 2023-11-23 20:55:49 -08:00
ishaan-jaff | 95b0b904cf | (feat) caching: Use seed, max_tokens etc in cache key | 2023-11-23 18:17:12 -08:00
ishaan-jaff | 6934c08785 | (chore) remove bloat: deprecated api.litellm cache | 2023-11-23 17:20:22 -08:00
Krrish Dholakia | c290be69f6 | fix(caching.py): fix linting issues | 2023-11-23 13:21:45 -08:00
Krrish Dholakia | 0e3064ac8c | fix(router.py): fix caching for tracking cooldowns + usage | 2023-11-23 11:13:32 -08:00
Krrish Dholakia | 497419a766 | fix(router.py): add support for cooldowns with redis | 2023-11-22 19:54:22 -08:00
Krrish Dholakia | 8ba438b3a2 | fix(caching.py): dump model response object as json | 2023-11-13 10:41:04 -08:00
Krrish Dholakia | d0b23a2722 | refactor(all-files): removing all print statements; adding pre-commit + flake8 to prevent future regressions | 2023-11-04 12:50:15 -07:00
Krrish Dholakia | 53cb0f3974 | fix(caching.py): fixing pr issues | 2023-10-31 18:32:40 -07:00
seva | f0a9f8c61e | Router & Caching fixes: add optional TTL to Cache parameters; fix tpm and rpm caching in Router | 2023-10-30 13:29:35 +01:00
ishaan-jaff | 2a3da7e43c | (feat) Redis caching print exception statements when it fails | 2023-10-25 11:29:48 -07:00
Krrish Dholakia | 204218508d | fix(router.py): completing redis support work for router | 2023-10-18 12:13:00 -07:00
ishaan-jaff | 7708d45374 | (feat) add docstring for caching | 2023-10-14 16:08:42 -07:00
ishaan-jaff | 7d9096ce37 | add hosted api.litellm.ai for caching | 2023-10-02 10:27:18 -07:00
Krrish Dholakia | 7cf5be98a2 | add contributor message to code | 2023-09-25 10:00:10 -07:00
ishaan-jaff | da87721a20 | caching updates | 2023-09-08 18:06:47 -07:00
ishaan-jaff | ee68836684 | fix redis caching | 2023-08-28 22:10:15 -07:00
ishaan-jaff | 309e5e7046 | with new caching | 2023-08-28 21:57:00 -07:00
ishaan-jaff | d26df210f6 | add streaming_caching support | 2023-08-28 19:17:53 -07:00