Commit graph

112 commits

Author SHA1 Message Date
Krrish Dholakia
b2e7866ea9 fix(caching.py): respect redis namespace for all redis get/set requests 2024-03-30 20:20:29 -07:00
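For context on the namespace fix above: a minimal sketch of what namespaced redis get/set looks like with redis-py. The `litellm` prefix, the `{namespace}:{key}` key format, and the helper names are illustrative assumptions, not litellm's internal API.

```python
# Illustrative sketch only -- prefix, helper names, and ttl are assumptions.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)
NAMESPACE = "litellm"  # assumed namespace; lets several apps share one redis

def namespaced(key: str) -> str:
    return f"{NAMESPACE}:{key}"

def cache_set(key: str, value: str, ttl: int = 60) -> None:
    r.set(namespaced(key), value, ex=ttl)

def cache_get(key: str):
    return r.get(namespaced(key))
```

The commit message suggests the fix is about applying the configured namespace on every get/set path, not only some of them.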
Krrish Dholakia
555f0af027 fix(tpm_rpm_limiter.py): enable redis caching for tpm/rpm checks on keys/user/teams
allows tpm/rpm checks to work across instances

https://github.com/BerriAI/litellm/issues/2730
2024-03-30 20:01:36 -07:00
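A rough sketch of why backing the limiter with redis makes tpm/rpm checks work across instances: every proxy instance increments the same shared counter. The key format, window size, and expiry below are assumptions, not the tpm_rpm_limiter.py implementation.

```python
# Sketch of a shared per-minute request counter; key schema is an assumption.
import time
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def allow_request(api_key: str, rpm_limit: int) -> bool:
    window = int(time.time() // 60)           # current minute bucket
    counter_key = f"rpm:{api_key}:{window}"   # assumed key format
    count = r.incr(counter_key)               # atomic across all proxy instances
    if count == 1:
        r.expire(counter_key, 120)            # drop stale buckets automatically
    return count <= rpm_limit
```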
Ishaan Jaff
ee54bbcd89 (fix) undo changes from other branches 2024-03-26 09:22:19 -07:00
Ishaan Jaff
f1ebbd32b8 (feat) /cache/flushall 2024-03-26 09:18:58 -07:00
Ishaan Jaff
336fe2f876 (fix) in mem redis reads 2024-03-26 09:10:49 -07:00
Ishaan Jaff
c898ffe636 (feat) improve cache debugging in litellm 2024-03-25 18:26:58 -07:00
Ishaan Jaff
412a56eea4 (fix) verbose printing for batch redis writes 2024-03-25 18:02:31 -07:00
Ishaan Jaff
853ed0278f Merge branch 'main' into litellm_batch_write_redis_cache 2024-03-25 16:41:29 -07:00
Ishaan Jaff
ec0435bdea (feat) batch write redis cache output 2024-03-25 16:39:47 -07:00
Ishaan Jaff
c986842f26 (feat) v0 batch redis cache writes 2024-03-25 15:20:10 -07:00
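The two batch-write commits above buffer cache writes and flush them together. Below is a sketch of that general pattern using a redis-py pipeline; the class name, buffer size, and flush trigger are assumptions rather than litellm's actual code.

```python
# Sketch: accumulate writes in memory, flush them in one pipeline round trip.
import redis

class BatchRedisWriter:  # hypothetical name
    def __init__(self, client: redis.Redis, flush_every: int = 100):
        self.client = client
        self.flush_every = flush_every          # assumed flush threshold
        self.buffer: list[tuple[str, str, int]] = []

    def add(self, key: str, value: str, ttl: int = 60) -> None:
        self.buffer.append((key, value, ttl))
        if len(self.buffer) >= self.flush_every:
            self.flush()

    def flush(self) -> None:
        if not self.buffer:
            return
        with self.client.pipeline(transaction=False) as pipe:
            for key, value, ttl in self.buffer:
                pipe.set(key, value, ex=ttl)
            pipe.execute()                      # many SETs, one round trip
        self.buffer.clear()
```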
Krrish Dholakia
fec92767bb fix(caching.py): support default ttl for caching 2024-03-25 13:40:17 -07:00
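Supporting a default ttl generally means falling back to a cache-wide expiry when no per-call ttl is given. A tiny sketch of that fallback; the parameter names and 600-second default are assumptions.

```python
# Sketch of a default ttl fallback; names and values are illustrative.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)
DEFAULT_TTL = 600  # assumed cache-wide default, in seconds

def cache_set(key: str, value: str, ttl: int | None = None) -> None:
    r.set(key, value, ex=ttl if ttl is not None else DEFAULT_TTL)
```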
Krish Dholakia
e7ff074eab Merge pull request #2606 from BerriAI/litellm_jwt_auth_updates
fix(handle_jwt.py): track spend for user using jwt auth
2024-03-20 19:40:17 -07:00
Krrish Dholakia
f0d8472bfd fix(caching.py): enable async setting of cache for dual cache 2024-03-20 18:42:34 -07:00
Ishaan Jaff
9207a0bf13 (fix) self.redis_version issue 2024-03-20 10:36:08 -07:00
Ishaan Jaff
60a5e74352 (fix) redis 6.2 version incompatibility issue 2024-03-20 09:38:21 -07:00
Ishaan Jaff
e8f775ee04 (feat) litellm cache ping 2024-03-20 08:24:13 -07:00
Krrish Dholakia
dfcf16eb4d fix(caching.py): pass redis kwargs to connection pool init 2024-03-18 08:21:36 -07:00
Krrish Dholakia
27de1089a6 fix(caching.py): close redis connection pool upon proxy shutdown 2024-03-16 10:39:58 -07:00
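The two connection-pool commits above (passing redis kwargs into pool init, closing the pool on shutdown) line up with the standard redis-py pool lifecycle. A sketch follows; the kwargs shown and the shutdown hook name are assumptions, not the proxy's actual code.

```python
# Sketch of pool lifecycle with redis-py: build the pool from kwargs, reuse it,
# and disconnect it when the process shuts down.
import redis

redis_kwargs = {"host": "localhost", "port": 6379, "db": 0}  # illustrative kwargs
pool = redis.ConnectionPool(max_connections=20, **redis_kwargs)
client = redis.Redis(connection_pool=pool)

def on_shutdown() -> None:   # hypothetical hook called by the proxy at exit
    pool.disconnect()        # release sockets held by the pool
```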
Krrish Dholakia
45582d2fa5 test(test_caching.py): fix async tests 2024-03-15 18:09:25 -07:00
Krrish Dholakia
8d1c60bfdc feat(batch_redis_get.py): batch redis GET requests for a given key + call type
reduces the number of GET requests we're making in high-throughput scenarios
2024-03-15 14:40:11 -07:00
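Batching GETs usually means collecting the keys a request needs and fetching them in one MGET instead of N sequential GETs. A minimal sketch of that idea; how batch_redis_get.py groups keys per call type is not shown here.

```python
# Sketch: one MGET round trip instead of one GET per key.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def batch_get(keys: list[str]) -> dict[str, bytes | None]:
    values = r.mget(keys)
    return dict(zip(keys, values))
```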
Krrish Dholakia
92abddbf8b fix(caching.py): support redis caching with namespaces 2024-03-14 13:35:17 -07:00
Krrish Dholakia
8d4b7b60bf fix(caching.py): fix print statements 2024-03-14 12:58:34 -07:00
Krrish Dholakia
3232feb123 fix(proxy_server.py): fix key caching logic 2024-03-13 19:10:24 -07:00
Krish Dholakia
774ceb741c Merge pull request #2426 from BerriAI/litellm_whisper_cost_tracking
feat: add cost tracking + caching for `/audio/transcription` calls
2024-03-09 19:12:06 -08:00
ishaan-jaff
64ae386f87 (feat) debug when in-memory cache is used 2024-03-09 16:24:04 -08:00
Krrish Dholakia
2b50896b2d fix(caching.py): only add unique kwargs for transcription_only_kwargs in caching 2024-03-09 16:09:12 -08:00
Krrish Dholakia
b2ce963498 feat: add cost tracking + caching for transcription calls 2024-03-09 15:43:38 -08:00
Krrish Dholakia
12d663d693 fix(caching.py): add s3 path as a top-level param 2024-03-06 18:07:28 -08:00
Krrish Dholakia
8f8ac9d94e fix(utils.py): only return cached streaming object for streaming calls 2024-02-21 21:27:40 -08:00
Krrish Dholakia
96b8a141cc fix(caching.py): use print verbose for logging error 2024-02-15 18:12:09 -08:00
Krrish Dholakia
fe9180a39d fix(redis.py): fix instantiating redis client from url 2024-02-15 17:48:00 -08:00
ishaan-jaff
f68fd1c355 (fix) s3 cache proxy - fix NotImplementedError 2024-02-13 16:34:43 -08:00
ishaan-jaff
f246ef2cf7 (fix) remove extra statement 2024-02-07 19:24:27 -08:00
ishaan-jaff
9a1db230ad (fix) track cost for semantic_caching, place on langfuse trace 2024-02-07 19:21:50 -08:00
ishaan-jaff
ab4e7f2be9 (feat) show semantic-cache on health/readiness 2024-02-06 13:35:34 -08:00
ishaan-jaff
3c71eb1e71 allow setting the redis semantic cache embedding model 2024-02-06 10:22:02 -08:00
ishaan-jaff
617716752e (feat) log semantic_sim to langfuse 2024-02-06 09:31:57 -08:00
ishaan-jaff
0ddcebbf52 (feat) working semantic-cache on litellm proxy 2024-02-06 08:52:57 -08:00
ishaan-jaff
5be26109f5 (feat) RedisSemanticCache - async 2024-02-06 08:13:12 -08:00
ishaan-jaff
eaad671e40 (fix) semantic cache 2024-02-05 18:25:22 -08:00
ishaan-jaff
2955f8ed39 (feat) working - sync semantic caching 2024-02-05 17:58:12 -08:00
ishaan-jaff
1689d5790f (feat) add semantic cache 2024-02-05 12:28:21 -08:00
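The semantic-cache commits above return a cached response when a new prompt is close enough in embedding space to a previously seen one. The in-memory sketch below illustrates the idea only; the proxy's RedisSemanticCache is redis-backed, and the class, similarity threshold, and injected `embed` callable here are assumptions.

```python
# In-memory sketch of semantic caching; threshold and structure are illustrative.
from typing import Callable
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class SemanticCacheSketch:
    def __init__(self, embed: Callable[[str], np.ndarray], threshold: float = 0.9):
        self.embed = embed          # any embedding model, supplied by the caller
        self.threshold = threshold  # assumed similarity cutoff
        self.entries: list[tuple[np.ndarray, str]] = []

    def get(self, prompt: str) -> str | None:
        query = self.embed(prompt)
        for stored_embedding, response in self.entries:
            if cosine(query, stored_embedding) >= self.threshold:
                return response  # semantically similar prompt already cached
        return None

    def set(self, prompt: str, response: str) -> None:
        self.entries.append((self.embed(prompt), response))
```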
Krish Dholakia
45cbb3cf3d Merge branch 'main' into litellm_embedding_caching_updates 2024-02-03 18:08:47 -08:00
ishaan-jaff
755a43f27e (feat) cache - add delete cache 2024-02-02 18:36:51 -08:00
Krrish Dholakia
a5afe5bf41 fix(caching.py): add logging module support for caching 2024-01-20 17:34:29 -08:00
Duarte OC
440ee81504 adds s3 folder prefix to cache 2024-01-18 21:57:47 +01:00
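Adding a folder prefix to the s3 cache means every cache entry is written under a common path inside the bucket. A sketch with boto3; the bucket name, `cache/` prefix, and helper names are placeholders.

```python
# Sketch of s3-backed cache entries under a folder prefix; names are placeholders.
import boto3

s3 = boto3.client("s3")           # credentials come from the environment
BUCKET = "my-cache-bucket"        # placeholder bucket
PREFIX = "cache/"                 # assumed folder prefix for every cache object

def s3_cache_set(key: str, value: str) -> None:
    s3.put_object(Bucket=BUCKET, Key=f"{PREFIX}{key}", Body=value.encode())

def s3_cache_get(key: str) -> str | None:
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=f"{PREFIX}{key}")
        return obj["Body"].read().decode()
    except s3.exceptions.NoSuchKey:
        return None
```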
Krrish Dholakia
f08bb7e41f fix(utils.py): exclude s3 caching from individual item caching for embedding list
we can't bulk-upload to s3, so caching each embedding item individually would slow down calls

https://github.com/BerriAI/litellm/pull/1417
2024-01-13 16:19:30 +05:30
Krrish Dholakia
79cc739b53 fix(caching.py): fix async in-memory caching 2024-01-13 15:33:57 +05:30
Krrish Dholakia
cdadac1649 fix(caching.py): return updated kwargs from get_cache helper function 2024-01-13 15:04:34 +05:30
Krrish Dholakia
0182dee42b fix(caching.py): remove print verbose statement 2024-01-13 14:11:05 +05:30