Commit graph

27 commits

Author SHA1 Message Date
Krrish Dholakia
3560f0ef2c refactor: move all testing to top-level of repo
Closes https://github.com/BerriAI/litellm/issues/486
2024-09-28 21:08:14 -07:00
Ishaan Jaff
58171f35ef
[Fix proxy perf] Use correct cache key when reading from redis cache (#5928)
* fix parallel request limiter use correct user id

* fix `async def get_user_object(`

* use safe get_internal_user_object

* fix store internal users in redis correctly
2024-09-26 18:13:35 -07:00
Ishaan Jaff
7cbcf538c6
[Feat] Improve OTEL Tracking - Require all Redis Cache reads to be logged on OTEL (#5881)
* fix use previous internal usage caching logic

* fix test_dual_cache_uses_redis

* redis track event_metadata in service logging

* show otel error on _get_parent_otel_span_from_kwargs

* track parent otel span on internal usage cache

* update_request_status

* fix internal usage cache

* fix linting

* fix test internal usage cache

* fix linting error

* show event metadata in redis set

* fix test_get_team_redis

* fix test_get_team_redis

* test_proxy_logging_setup
2024-09-25 10:57:08 -07:00
Ishaan Jaff
3966c545b4 fix flaky tests 2024-08-27 22:47:11 -07:00
Ishaan Jaff
398295116f only write model tpm/rpm tracking when user set it 2024-08-18 09:58:09 -07:00
Ishaan Jaff
653d2e6ce0 fix parallel request limiter tests 2024-08-17 14:21:59 -07:00
Ishaan Jaff
221e5b829b fix parallel request limiter 2024-08-17 14:14:12 -07:00
Ishaan Jaff
785482f023 feat add settings for rpm/tpm limits for a model 2024-08-17 09:16:01 -07:00
Ishaan Jaff
1ee33478c9 track rpm/tpm usage per key+model 2024-08-16 18:28:58 -07:00
Krrish Dholakia
fde434be66 feat(proxy_server.py): return 'retry-after' param for rate limited requests
Closes https://github.com/BerriAI/litellm/issues/4695
2024-07-13 17:15:20 -07:00
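For a limiter with per-minute windows, the natural `retry-after` value is the time until the current window rolls over. A hedged sketch of that calculation (assumed one-minute windows; not the exact logic in `proxy_server.py`):

```python
import datetime

def retry_after_seconds(now: datetime.datetime) -> int:
    """Seconds until the current per-minute rate-limit window resets."""
    # Counters keyed by minute reset at the next whole-minute boundary.
    return 60 - now.second

# A rejected request would then carry it in the 429 response, e.g.:
# headers["Retry-After"] = str(retry_after_seconds(datetime.datetime.now()))
```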
Krrish Dholakia
8d56f72d5a test(test_parallel_request_limiter.py): fix test 2024-06-13 17:13:44 -07:00
Krrish Dholakia
4408b717f0 fix(parallel_request_limiter.py): fix user+team tpm/rpm limit check
Closes https://github.com/BerriAI/litellm/issues/3788
2024-05-27 08:48:23 -07:00
Krrish Dholakia
594ca947c8 fix(parallel_request_limiter.py): fix max parallel request limiter on retries 2024-05-15 20:16:11 -07:00
Krrish Dholakia
473e667bdf test: refactor testing to handle the hash token fix 2024-04-17 17:31:39 -07:00
Krrish Dholakia
2163e43b9a test(test_parallel_request_limiter.py): add more verbose logging 2024-03-06 19:21:57 -08:00
Krrish Dholakia
f86ab19067 fix(parallel_request_limiter.py): fix team rate limit enforcement 2024-02-26 18:06:13 -08:00
ishaan-jaff
b5900099af (feat) tpm/rpm limit by User 2024-02-22 18:44:03 -08:00
Krrish Dholakia
b9393fb769 fix(test_parallel_request_limiter.py): use mock responses for streaming 2024-02-08 21:45:38 -08:00
ishaan-jaff
1489fd369b (fix) test_normal_router_tpm_limit 2024-02-06 12:44:30 -08:00
ishaan-jaff
47bed68c7f (fix) test_normal_router_tpm_limit 2024-02-06 06:46:49 -08:00
ishaan-jaff
c353161456 (fix) test_parallel limiter fix 2024-02-03 21:03:15 -08:00
Krrish Dholakia
3e35041758 test(test_parallel_request_limiter.py): fix test to handle minute changes 2024-02-03 20:39:31 -08:00
Krrish Dholakia
b8c2cbdd08 test(test_parallel_request_limiter.py): fix test import 2024-02-02 10:15:33 -08:00
Krrish Dholakia
bbe71c8375 fix(test_parallel_request_limiter): increase time limit for waiting for success logging event to happen 2024-01-30 13:26:17 -08:00
Krrish Dholakia
22c3321208 test(test_parallel_request_limiter.py): unit testing for tpm/rpm rate limits 2024-01-18 15:25:29 -08:00
Krrish Dholakia
aef59c554f feat(parallel_request_limiter.py): add support for tpm/rpm limits 2024-01-18 13:52:15 -08:00
Krrish Dholakia
1ea3833ef7 fix(parallel_request_limiter.py): decrement count for failed llm calls
https://github.com/BerriAI/litellm/issues/1477
2024-01-18 12:42:14 -08:00