Commit graph

654 commits

Author SHA1 Message Date
Krrish Dholakia
180718c33f fix(router.py): support verify_ssl flag
Fixes https://github.com/BerriAI/litellm/issues/3162#issuecomment-2075273807
2024-04-26 15:38:01 -07:00
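A minimal sketch of what the flag controls, assuming it is plumbed through to the router's per-deployment httpx clients; the flag name comes from the commit title and the exact wiring may differ:

```python
import httpx

# Illustrative only: with verify_ssl=False, the router's per-deployment
# httpx clients would be built without certificate verification, which is
# what a self-signed internal gateway (see the linked issue) needs.
insecure_client = httpx.AsyncClient(verify=False)

# With verification on (the default), the same request against a
# self-signed endpoint would raise an SSL error.
```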
Krrish Dholakia
160acc085a fix(router.py): fix default retry logic 2024-04-25 11:57:27 -07:00
Ishaan Jaff
4e707af592 Revert "fix(router.py): fix max retries on set_client"
This reverts commit 821844c1a3.
2024-04-24 23:19:14 -07:00
Krrish Dholakia
72dffdba39 fix(router.py): fix default retry on router 2024-04-24 23:06:53 -07:00
Krrish Dholakia
821844c1a3 fix(router.py): fix max retries on set_client 2024-04-24 22:03:01 -07:00
Ishaan Jaff
2c7f4695d9
Merge pull request #3283 from BerriAI/litellm_debug_lowest_latency
[Fix] Add better observability for debugging lowest latency routing
2024-04-24 20:42:52 -07:00
Ishaan Jaff
212369498e fix - set latency stats in kwargs 2024-04-24 20:13:45 -07:00
Krrish Dholakia
5650e8ea44 feat(router.py): support mock testing fallbacks flag
allow the user to test whether fallbacks work as expected by setting a `mock_testing_fallbacks = True` flag on a call
2024-04-24 20:13:10 -07:00
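Based on the commit description, a sketch of how the flag would be exercised; deployment params are placeholders and assume API keys are configured in the environment:

```python
from litellm import Router

# Two deployments: the fallback chain routes gpt-3.5-turbo failures to gpt-4.
router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo"}},
        {"model_name": "gpt-4", "litellm_params": {"model": "gpt-4"}},
    ],
    fallbacks=[{"gpt-3.5-turbo": ["gpt-4"]}],
)

# mock_testing_fallbacks=True forces the primary call to fail, so the
# fallback path is exercised without waiting for a real outage.
response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "ping"}],
    mock_testing_fallbacks=True,
)
```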
Krrish Dholakia
94cbe5516e feat(router.py): support mock testing fallbacks flag on router 2024-04-24 17:33:00 -07:00
Krrish Dholakia
f54510b6ee fix(proxy_server.py): fix /config/update/
allows updating the router config via the UI and propagating the change across all proxy instances by persisting config changes to the db
2024-04-24 16:42:42 -07:00
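A hypothetical sketch of calling the endpoint directly; the path comes from the commit title, but the payload fields and auth header here are assumptions:

```python
import httpx

# Assumed request shape: update a router setting through the proxy, which
# persists it to the db so other proxy instances pick up the change.
resp = httpx.post(
    "http://localhost:4000/config/update",
    headers={"Authorization": "Bearer sk-1234"},  # proxy admin key (assumed)
    json={"router_settings": {"routing_strategy": "usage-based-routing-v2"}},
)
print(resp.status_code)
```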
Krrish Dholakia
bae6f41017 build(add-fallbacks-on-UI): allows admin to add fallbacks on the UI 2024-04-24 15:40:02 -07:00
Ishaan Jaff
3d1a158b63 feat - update deployments 2024-04-24 09:53:42 -07:00
Ishaan Jaff
41ab5f2f56 fix - updating router settings from DB 2024-04-23 12:07:58 -07:00
Krrish Dholakia
9d2726c2ac fix(proxy_server.py): handle router being initialized without a model list 2024-04-23 10:52:28 -07:00
Krrish Dholakia
a520e1bd6f fix(router.py): add random shuffle and tpm-based shuffle for async shuffle logic 2024-04-22 12:58:59 -07:00
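An illustration of what tpm-based shuffling means, not litellm's exact code: weight the random pick by each deployment's tpm limit, so larger deployments absorb proportionally more traffic.

```python
import random

# Deployments with larger tpm limits are proportionally more likely to be
# selected; ids and limits here are illustrative.
deployments = [
    {"model_id": "azure-eastus", "tpm": 240_000},
    {"model_id": "azure-westus", "tpm": 60_000},
]

weights = [d["tpm"] for d in deployments]
picked = random.choices(deployments, weights=weights, k=1)[0]
print(picked["model_id"])  # "azure-eastus" ~80% of the time
```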
Krrish Dholakia
b96741e4f4 fix(router.py): async simple-shuffle support 2024-04-20 15:01:12 -07:00
Krrish Dholakia
c96ca1f85e fix(router.py): improve debug logs 2024-04-20 13:12:12 -07:00
Krrish Dholakia
0f69f0b44e test(test_router_max_parallel_requests.py): more extensive testing for setting max parallel requests 2024-04-20 12:56:54 -07:00
Krrish Dholakia
7aa737cf10 fix(router.py): log whether router caching is set up in info logs 2024-04-20 12:34:09 -07:00
Krrish Dholakia
47e9d5f2ec fix(router.py): fix init line for self.default_max_parallel_requests 2024-04-20 12:08:21 -07:00
Krrish Dholakia
4c78f8f309 fix(router.py): calculate max_parallel_requests from given tpm limits
use the Azure formula to derive rpm (and from it max_parallel_requests) from a deployment's tpm limits
2024-04-20 10:43:18 -07:00
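Azure's published quota ratio is roughly 6 RPM per 1000 TPM, so a tpm limit implies an rpm ceiling that can bound concurrency. A sketch of that derivation; litellm's exact rounding and clamping may differ:

```python
def max_parallel_requests_from_tpm(tpm: int) -> int:
    """Illustrative: derive an rpm ceiling from a tpm limit using Azure's
    ratio of 6 RPM per 1000 TPM, and use it to bound parallelism."""
    rpm = tpm * 6 // 1000
    return max(rpm, 1)  # never drop below one concurrent request

print(max_parallel_requests_from_tpm(30_000))  # -> 180
```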
Krish Dholakia
f1340b52dc
Merge pull request #3153 from BerriAI/litellm_usage_based_routing_v2_improvements
usage based routing v2 improvements - unit testing + *NEW* async + sync 'pre_call_checks'
2024-04-18 22:16:16 -07:00
Krrish Dholakia
9c42c847a5 fix(router.py): instrument pre-call-checks for all openai endpoints 2024-04-18 21:54:25 -07:00
Krrish Dholakia
81573b2dd9 fix(test_lowest_tpm_rpm_routing_v2.py): unit testing for usage-based-routing-v2 2024-04-18 21:38:00 -07:00
Ishaan Jaff
67d356b933 fix - show api base on hanging requests 2024-04-18 20:58:02 -07:00
Krrish Dholakia
2ffd057042 test(test_models.py): ensure only admin can call /health 2024-04-16 18:13:40 -07:00
Ishaan Jaff
caec0a3938 fix - make router set_settings non-blocking 2024-04-16 18:04:21 -07:00
Ishaan Jaff
7e99854d05
Merge pull request #3079 from BerriAI/litellm_router_save_settings_ui
UI - Save / Edit router settings UI
2024-04-16 16:57:42 -07:00
Ishaan Jaff
59b154f152 feat - update router settings on Admin UI 2024-04-16 15:36:26 -07:00
Krrish Dholakia
13cd252f3e fix(proxy_server.py): ensure id used in delete deployment matches id used in litellm Router 2024-04-16 15:17:18 -07:00
Ishaan Jaff
e271ce8030 router - get settings 2024-04-16 14:22:54 -07:00
Krrish Dholakia
2d4fe072ad fix(proxy_server.py): fix delete models endpoint
https://github.com/BerriAI/litellm/issues/2951
2024-04-15 18:34:58 -07:00
Krrish Dholakia
e4bcc51e44 build(ui): add vertex ai models via ui 2024-04-15 15:59:36 -07:00
Krish Dholakia
0d2a75d301
Merge pull request #2981 from grav/grav/default_model_name_to_none
Default model_name to None in _aembedding
2024-04-15 14:45:01 -07:00
Krrish Dholakia
43c37c31ea fix(proxy_server.py): return None if no model list is set on the router
https://github.com/BerriAI/litellm/issues/2979
2024-04-15 09:02:18 -07:00
Krrish Dholakia
ea1574c160 test(test_openai_endpoints.py): add concurrency testing for user defined rate limits on proxy 2024-04-12 18:56:13 -07:00
Krrish Dholakia
c03b0bbb24 fix(router.py): support pre_call_rpm_check for lowest_tpm_rpm_v2 routing
have routing strategies expose an 'update rpm' function for checking and updating rpm before the call
2024-04-12 18:25:14 -07:00
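An interface sketch of the idea, with assumed names and an in-memory counter standing in for the real usage store:

```python
# Assumed structure, simplified from the commit description: the routing
# strategy exposes a pre-call hook that checks a deployment's rpm counter
# and reserves a slot before the request goes out.
class LowestTpmRpmV2Sketch:
    def __init__(self) -> None:
        self.rpm_usage: dict[str, int] = {}  # deployment id -> requests this minute

    def pre_call_rpm_check(self, deployment_id: str, rpm_limit: int) -> None:
        used = self.rpm_usage.get(deployment_id, 0)
        if used + 1 > rpm_limit:
            raise RuntimeError(f"{deployment_id} would exceed its rpm limit ({rpm_limit})")
        self.rpm_usage[deployment_id] = used + 1  # reserve the slot pre-call
```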
Krrish Dholakia
2267aeb803 fix(router.py): create a semaphore for each deployment with rpm
run the semaphore logic for each deployment that has an rpm limit
2024-04-12 18:03:23 -07:00
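A simplified sketch of the per-deployment semaphore pattern; deployment ids and limits are illustrative:

```python
import asyncio

# Each deployment with a concurrency/rpm limit gets its own semaphore,
# capping in-flight requests to that deployment.
semaphores = {
    "azure-eastus": asyncio.Semaphore(10),
    "azure-westus": asyncio.Semaphore(4),
}

async def call_deployment(deployment_id: str) -> str:
    async with semaphores[deployment_id]:
        await asyncio.sleep(0.1)  # stand-in for the real LLM request
        return f"response from {deployment_id}"

async def main() -> None:
    print(await asyncio.gather(*(call_deployment("azure-eastus") for _ in range(3))))

asyncio.run(main())
```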
Krrish Dholakia
a4e415b23c fix(router.py): initial commit for semaphores on router 2024-04-12 17:59:05 -07:00
Mikkel Gravgaard
759414004a Default model_name to None in _aembedding 2024-04-12 11:33:03 +02:00
Ishaan Jaff
921f6d307e fix - stop printing api_key in debug mode 2024-04-11 15:05:22 -07:00
Krrish Dholakia
84d43484c6 fix(router.py): make sure pre call rpm check runs even when model not in model cost map 2024-04-11 09:27:46 -07:00
Krrish Dholakia
266dba65e7 fix(router.py): handle 1 deployment being picked 2024-04-10 18:32:54 -07:00
Krrish Dholakia
52462e8bac fix(router.py): move specific deployment check outside common functions 2024-04-10 18:06:31 -07:00
Krrish Dholakia
37ac17aebd fix(router.py): fix datetime object 2024-04-10 17:55:24 -07:00
Krrish Dholakia
2531701a2a fix(router.py): make get_cooldown_deployment logic async 2024-04-10 16:57:01 -07:00
Krrish Dholakia
a47a719caa fix(router.py): generate consistent model ids
having the same id for a deployment lets redis usage caching work across multiple instances
2024-04-10 15:23:57 -07:00
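One assumed way to get consistent ids, following the commit description: derive the id deterministically from the deployment's config, so every proxy instance computes the same value and shared redis counters line up.

```python
import hashlib
import json

# Assumed approach: hash the deployment's config canonically so the id is
# stable across instances and restarts.
def deployment_id(litellm_params: dict) -> str:
    canonical = json.dumps(litellm_params, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

a = deployment_id({"model": "azure/gpt-35-turbo", "api_base": "https://eastus.example"})
b = deployment_id({"api_base": "https://eastus.example", "model": "azure/gpt-35-turbo"})
assert a == b  # same config -> same id, regardless of key order or instance
```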
Krrish Dholakia
180cf9bd5c feat(lowest_tpm_rpm_v2.py): move to using redis.incr and redis.mget for getting model usage from redis
makes routing work across multiple instances
2024-04-10 14:56:23 -07:00
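A sketch of the shared-counter pattern the commit names, with illustrative key names: INCR bumps a deployment's per-minute usage atomically, and MGET reads every deployment's usage in one round trip, so any instance can rank deployments against the same numbers.

```python
import redis

r = redis.Redis(host="localhost", port=6379)

def record_usage(deployment_id: str, minute: str, tokens: int) -> None:
    # Atomic increment; concurrent proxy instances can't clobber each other.
    r.incrby(f"tpm:{deployment_id}:{minute}", tokens)

def read_usage(deployment_ids: list[str], minute: str) -> dict[str, int]:
    # One round trip for all deployments' counters.
    keys = [f"tpm:{d}:{minute}" for d in deployment_ids]
    return {d: int(v or 0) for d, v in zip(deployment_ids, r.mget(keys))}
```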
Krrish Dholakia
b2741933dc fix(proxy_cli.py): don't double load the router config
was causing callbacks to be instantiated twice, double-counting usage in the cache
2024-04-10 13:23:56 -07:00
Krish Dholakia
83f608dc5d
Merge pull request #2880 from BerriAI/litellm_api_base_alerting
feat(proxy/utils.py): return api base for request hanging alerts
2024-04-06 19:17:18 -07:00