Commit graph

508 commits

Author SHA1 Message Date
Krrish Dholakia
c70fbd0654 fix(router.py): fix router should_retry 2024-04-27 15:13:20 -07:00
Krrish Dholakia
71d63c33da fix(router.py): fix sync should_retry logic 2024-04-27 14:48:07 -07:00
Krish Dholakia
26bacef87b Merge branch 'main' into litellm_default_router_retries 2024-04-27 11:21:57 -07:00
Krrish Dholakia
069d1f863d fix(router.py): add /v1/ if missing to base url, for OpenAI-compatible APIs
Fixes https://github.com/BerriAI/litellm/issues/2279
2024-04-26 17:05:07 -07:00
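
The commit above describes normalizing an OpenAI-compatible base URL by appending a `/v1` path segment when it is missing. A minimal sketch of that normalization (the helper name is hypothetical, not litellm's actual code):

```python
def ensure_v1_suffix(api_base: str) -> str:
    """Append '/v1' to an OpenAI-compatible base URL if it is missing.

    Illustrative helper for the fix described above; not the real implementation.
    """
    base = api_base.rstrip("/")
    if not base.endswith("/v1"):
        base = base + "/v1"
    return base

# e.g. "http://localhost:8000" -> "http://localhost:8000/v1"
```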
Krrish Dholakia
ca4918b9a7 fix(router.py): support verify_ssl flag
Fixes https://github.com/BerriAI/litellm/issues/3162#issuecomment-2075273807
2024-04-26 15:38:01 -07:00
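
For context on the `verify_ssl` flag above: one plausible way such a flag maps onto the underlying client is by handing the OpenAI SDK an httpx client with certificate verification disabled. This is a hedged sketch of that pattern, not litellm's actual wiring:

```python
import httpx
import openai

# The OpenAI SDK accepts a custom httpx client, and httpx exposes `verify`.
# Disabling verification is what a verify_ssl=False setting plausibly boils down to.
http_client = httpx.Client(verify=False)  # skip TLS certificate verification
client = openai.OpenAI(api_key="sk-...", http_client=http_client)  # placeholder key
```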
Krrish Dholakia
f1b2405fe0 fix(router.py): fix default retry logic 2024-04-25 11:57:27 -07:00
Ishaan Jaff
c0b554169c Revert "fix(router.py): fix max retries on set_client"
This reverts commit 821844c1a3.
2024-04-24 23:19:14 -07:00
Krrish Dholakia
399d6647e6 fix(router.py): fix default retry on router 2024-04-24 23:06:53 -07:00
Krrish Dholakia
2154ec624b fix(router.py): fix max retries on set_client 2024-04-24 22:03:01 -07:00
Ishaan Jaff
ad637c15ce Merge pull request #3283 from BerriAI/litellm_debug_lowest_latency
[Fix] Add better observability for debugging lowest latency routing
2024-04-24 20:42:52 -07:00
Ishaan Jaff
5dae1cf303 fix - set latency stats in kwargs 2024-04-24 20:13:45 -07:00
Krrish Dholakia
1988ce3247 feat(router.py): support mock testing fallbacks flag
allows the user to test whether fallbacks work as expected by setting a `mock_testing_fallbacks = True` flag on a call
2024-04-24 20:13:10 -07:00
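
A minimal sketch of how the mock-fallback flag from the commit above might be exercised, assuming `mock_testing_fallbacks=True` is accepted as a call-time kwarg that forces the primary deployment to fail so the fallback path runs. Model names and keys are placeholders:

```python
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-..."}},
        {"model_name": "backup-model", "litellm_params": {"model": "gpt-4", "api_key": "sk-..."}},
    ],
    fallbacks=[{"gpt-3.5-turbo": ["backup-model"]}],
)

response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "ping"}],
    mock_testing_fallbacks=True,  # force the fallback path so it can be verified
)
```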
Krrish Dholakia
609793df83 feat(router.py): support mock testing fallbacks flag on router 2024-04-24 17:33:00 -07:00
Krrish Dholakia
f4bd85a489 fix(proxy_server.py): fix /config/update/
allows updating the router config via the UI and propagating the change across all proxy instances by persisting config changes to the db
2024-04-24 16:42:42 -07:00
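
The propagation described above follows a shared-store pattern: the `/config/update/` handler writes the new router settings to the database, and each proxy instance periodically reloads that record into its in-memory router. The sketch below is an assumption-laden illustration; the table, query interface, and `update_settings` call are hypothetical, not litellm's actual schema or API:

```python
import json

async def save_router_config(db, new_settings: dict) -> None:
    # Persist the updated settings so other proxy instances can see them.
    await db.execute(
        "UPDATE proxy_config SET router_settings = $1 WHERE id = 1",
        json.dumps(new_settings),
    )

async def reload_router_config(db, router) -> None:
    # Called periodically on every instance to pick up changes made elsewhere.
    row = await db.fetchrow("SELECT router_settings FROM proxy_config WHERE id = 1")
    if row:
        router.update_settings(**json.loads(row["router_settings"]))  # assumed method
```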
Krrish Dholakia
50f5241a4e build(add-fallbacks-on-UI): allows admin to add fallbacks on the UI 2024-04-24 15:40:02 -07:00
Ishaan Jaff
5ff0bad6a4 feat - update deployments 2024-04-24 09:53:42 -07:00
Ishaan Jaff
634139ba59 fix - updating router settings from DB 2024-04-23 12:07:58 -07:00
Krrish Dholakia
f1f08af785 fix(proxy_server.py): handle router being initialized without a model list 2024-04-23 10:52:28 -07:00
Krrish Dholakia
cce1aefdfb fix(router.py): add random shuffle and tpm-based shuffle for async shuffle logic 2024-04-22 12:58:59 -07:00
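
The two shuffle modes mentioned in the commit above can be illustrated with a small selection helper: a plain random pick, or a pick weighted by each deployment's TPM limit. The field names here are assumptions, not litellm's actual deployment schema:

```python
import random

def weighted_pick(deployments: list[dict]) -> dict:
    """Pick a deployment: TPM-weighted if limits are set, else uniformly at random."""
    weights = [d.get("tpm") or 0 for d in deployments]
    if any(weights):
        return random.choices(deployments, weights=weights, k=1)[0]
    return random.choice(deployments)
```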
Krrish Dholakia
9cf8817dad fix(router.py): async simple-shuffle support 2024-04-20 15:01:12 -07:00
Krrish Dholakia
485ad73133 fix(router.py): improve debug logs 2024-04-20 13:12:12 -07:00
Krrish Dholakia
9f6e90e17d test(test_router_max_parallel_requests.py): more extensive testing for setting max parallel requests 2024-04-20 12:56:54 -07:00
Krrish Dholakia
a9108cbdc2 fix(router.py): log whether router caching is set up in info logs 2024-04-20 12:34:09 -07:00
Krrish Dholakia
27a32e930e fix(router.py): fix init line for self.default_max_parallel_requests 2024-04-20 12:08:21 -07:00
Krrish Dholakia
22d3121f48 fix(router.py): calculate max_parallel_requests from given tpm limits
use the azure formula to calculate rpm -> max_parallel_requests based on a deployment's tpm limits
2024-04-20 10:43:18 -07:00
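
The "azure formula" referenced above is the Azure OpenAI quota convention of roughly 6 RPM per 1,000 TPM. A hedged sketch of deriving a parallel-request cap from a TPM limit; the final RPM-to-concurrency mapping is an illustrative assumption, not litellm's exact formula:

```python
def default_max_parallel_requests(tpm: int) -> int:
    """Derive a max_parallel_requests value from a deployment's TPM limit."""
    rpm = (tpm // 1000) * 6   # Azure convention: ~6 requests/min per 1k tokens/min
    return max(rpm, 1)        # treat the RPM budget as the concurrency cap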
Krish Dholakia
1c6f6592ea Merge pull request #3153 from BerriAI/litellm_usage_based_routing_v2_improvements
usage based routing v2 improvements - unit testing + *NEW* async + sync 'pre_call_checks'
2024-04-18 22:16:16 -07:00
Krrish Dholakia
5bb73dc9c0 fix(router.py): instrument pre-call-checks for all openai endpoints 2024-04-18 21:54:25 -07:00
Krrish Dholakia
376ee4e9d7 fix(test_lowest_tpm_rpm_routing_v2.py): unit testing for usage-based-routing-v2 2024-04-18 21:38:00 -07:00
Ishaan Jaff
653dc44c08 fix - show api base on hanging requests 2024-04-18 20:58:02 -07:00
Krrish Dholakia
afdaa349fa test(test_models.py): ensure only admin can call /health 2024-04-16 18:13:40 -07:00
Ishaan Jaff
864533834a fix - make router set_settings non blocking 2024-04-16 18:04:21 -07:00
Ishaan Jaff
99065cb6b4 Merge pull request #3079 from BerriAI/litellm_router_save_settings_ui
UI - Save / Edit router settings UI
2024-04-16 16:57:42 -07:00
Ishaan Jaff
eadca455ad feat - update router settings on Admin UI 2024-04-16 15:36:26 -07:00
Krrish Dholakia
c6ad02b167 fix(proxy_server.py): ensure id used in delete deployment matches id used in litellm Router 2024-04-16 15:17:18 -07:00
Ishaan Jaff
fb3edc6d92 router - get settings 2024-04-16 14:22:54 -07:00
Krrish Dholakia
200e8784f3 fix(proxy_server.py): fix delete models endpoint
https://github.com/BerriAI/litellm/issues/2951
2024-04-15 18:34:58 -07:00
Krrish Dholakia
7179bf753a build(ui): add vertex ai models via ui 2024-04-15 15:59:36 -07:00
Krish Dholakia
0bc7c98265 Merge pull request #2981 from grav/grav/default_model_name_to_none
Default model_name to None in _aembedding
2024-04-15 14:45:01 -07:00
Krrish Dholakia
9c183fcd9f fix(proxy_server.py): return none if no model list set in router
https://github.com/BerriAI/litellm/issues/2979
2024-04-15 09:02:18 -07:00
Krrish Dholakia
c177407f7b test(test_openai_endpoints.py): add concurrency testing for user defined rate limits on proxy 2024-04-12 18:56:13 -07:00
Krrish Dholakia
d9b8f63e86 fix(router.py): support pre_call_rpm_check for lowest_tpm_rpm_v2 routing
have routing strategies expose an ‘update rpm’ function; for checking + updating rpm pre call
2024-04-12 18:25:14 -07:00
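
The pre-call RPM check described above boils down to incrementing a per-deployment, per-minute counter before the request and skipping the deployment if its budget is already spent. This sketch shows the idea with an in-memory counter; it is not the lowest_tpm_rpm_v2 implementation, which uses the router's cache:

```python
import time
from collections import defaultdict

class RPMPreCallCheck:
    """Illustrative check-and-update of a deployment's RPM budget before a call."""

    def __init__(self):
        self.counters: dict[tuple[str, int], int] = defaultdict(int)

    def pre_call_check(self, deployment_id: str, rpm_limit: int) -> bool:
        minute = int(time.time() // 60)
        key = (deployment_id, minute)
        if self.counters[key] >= rpm_limit:
            return False          # over budget this minute -> skip deployment
        self.counters[key] += 1   # reserve a slot before making the call
        return True
```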
Krrish Dholakia
5f1fcaad6d fix(router.py): create a semaphore for each deployment with rpm
run semaphore logic for each deployment with rpm
2024-04-12 18:03:23 -07:00
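
A per-deployment semaphore, as in the commit above, bounds how many requests are in flight against each deployment that has an RPM limit. The mapping of RPM to semaphore size below is an assumption for illustration only:

```python
import asyncio

semaphores: dict[str, asyncio.Semaphore] = {}

def get_deployment_semaphore(deployment_id: str, rpm: int) -> asyncio.Semaphore:
    # One semaphore per deployment; created lazily on first use.
    if deployment_id not in semaphores:
        semaphores[deployment_id] = asyncio.Semaphore(max(rpm, 1))
    return semaphores[deployment_id]

async def call_with_limit(deployment_id: str, rpm: int, coro_factory):
    # Hold a slot for this deployment while the request runs.
    async with get_deployment_semaphore(deployment_id, rpm):
        return await coro_factory()
```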
Krrish Dholakia
87c621d726 fix(router.py): initial commit for semaphores on router 2024-04-12 17:59:05 -07:00
Mikkel Gravgaard
c3a8f9a447 Default model_name to None in _aembedding 2024-04-12 11:33:03 +02:00
Ishaan Jaff
0d8063ee49 fix - stop printing api_key in debug mode 2024-04-11 15:05:22 -07:00
Krrish Dholakia
0863c10b0b fix(router.py): make sure pre call rpm check runs even when model not in model cost map 2024-04-11 09:27:46 -07:00
Krrish Dholakia
f5ed34f801 fix(router.py): handle 1 deployment being picked 2024-04-10 18:32:54 -07:00
Krrish Dholakia
5744d17086 fix(router.py): move specific deployment check outside common functions 2024-04-10 18:06:31 -07:00
Krrish Dholakia
8f06c2d8c4 fix(router.py): fix datetime object 2024-04-10 17:55:24 -07:00
Krrish Dholakia
384245e331 fix(router.py): make get_cooldown_deployment logic async 2024-04-10 16:57:01 -07:00