ce9ede6110 | 2024-04-30 16:01:47 -07:00 | Krish Dholakia
    Merge pull request #3370 from BerriAI/litellm_latency_buffer
    fix(lowest_latency.py): allow setting a buffer for getting values within a certain latency threshold
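The latency buffer referenced above lets the routing strategy pick among near-fastest deployments instead of always hammering the single fastest one. A minimal sketch of that idea (the deployment dict shape and `buffer` semantics here are assumptions for illustration, not litellm's actual internals):

```python
import random

def pick_lowest_latency(deployments, buffer=0.0):
    """Pick randomly among deployments whose average latency falls
    within (1 + buffer) of the fastest, spreading load across
    near-equivalent deployments instead of always choosing one."""
    fastest = min(d["latency"] for d in deployments)
    threshold = fastest * (1 + buffer)
    candidates = [d for d in deployments if d["latency"] <= threshold]
    return random.choice(candidates)
```

With `buffer=0.0` this degenerates to the plain lowest-latency pick.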
0267069c6a | 2024-04-30 15:39:14 -07:00 | Krrish Dholakia
    fix(router.py): return routing args as dict

6a2b4bcab8 | 2024-04-30 15:29:50 -07:00 | Krrish Dholakia
    fix(router.py): only check /v1 for azure ai studio models
    Fixes https://github.com/BerriAI/litellm/issues/3346

470dbf9745 | 2024-04-30 13:54:43 -07:00 | Krrish Dholakia
    build(ui): allow user to modify 'lowest_latency_buffer' on UI

f0e48cdd53 | 2024-04-29 18:48:04 -07:00 | Krrish Dholakia
    fix(router.py): raise better exception when no deployments are available
    Fixes https://github.com/BerriAI/litellm/issues/3355

e7b4882e97 | 2024-04-29 16:48:01 -07:00 | Krrish Dholakia
    fix(router.py): fix high-traffic bug for usage-based-routing-v2

f10a066d36 | 2024-04-29 15:04:37 -07:00 | Krrish Dholakia
    fix(lowest_tpm_rpm_v2.py): add more detail to 'No deployments available' error message

b9c0b55e7c | 2024-04-27 21:02:19 -07:00 | Krrish Dholakia
    test: fix test - set num_retries=0

280148543f | 2024-04-27 17:36:28 -07:00 | Krrish Dholakia
    fix(router.py): fix trailing slash handling for api base which contains /v1

ec19c1654b | 2024-04-27 17:22:50 -07:00 | Krrish Dholakia
    fix(router.py): set initial value of default litellm params to none

87aad0d2c8 | 2024-04-27 15:59:38 -07:00 | Krrish Dholakia
    fix(router.py): fix router should_retry logic

9f24421d44 | 2024-04-27 15:13:20 -07:00 | Krrish Dholakia
    fix(router.py): fix router should_retry

5e0bd5982e | 2024-04-27 14:48:07 -07:00 | Krrish Dholakia
    fix(router.py): fix sync should_retry logic
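The cluster of should_retry fixes above all concern the same decision: which failures are worth retrying, and for how long. A hedged sketch of that kind of policy (the status-code set and function signature are illustrative, not litellm's actual implementation):

```python
# Transient failures that are typically worth retrying:
# timeouts, rate limits, and server-side errors.
RETRYABLE_STATUS = {408, 429, 500, 502, 503, 504}

def should_retry(status_code: int, attempts_made: int, num_retries: int) -> bool:
    """Retry only transient errors, and only while retries remain."""
    return status_code in RETRYABLE_STATUS and attempts_made < num_retries
```

A 400 (bad request) is permanent and never retried, while a 429 (rate limit) is retried until the retry budget is exhausted.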
1a06f009d1 | 2024-04-27 11:21:57 -07:00 | Krish Dholakia
    Merge branch 'main' into litellm_default_router_retries

e05764bdb7 | 2024-04-26 17:05:07 -07:00 | Krrish Dholakia
    fix(router.py): add /v1/ if missing to base url, for OpenAI-compatible APIs
    Fixes https://github.com/BerriAI/litellm/issues/2279
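This base-url fix, together with the trailing-slash fix earlier in this log, amounts to normalizing an OpenAI-compatible endpoint URL. A sketch of that normalization, assuming the goal is simply a path that ends in /v1 with no trailing slash (function name and exact rules are assumptions, not litellm's code):

```python
from urllib.parse import urlparse

def normalize_openai_base(api_base: str) -> str:
    """Strip any trailing slash, then append /v1 if the URL path
    lacks it, so OpenAI-compatible clients hit the right route."""
    base = api_base.rstrip("/")
    if not urlparse(base).path.endswith("/v1"):
        base += "/v1"
    return base
```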
180718c33f | 2024-04-26 15:38:01 -07:00 | Krrish Dholakia
    fix(router.py): support verify_ssl flag
    Fixes https://github.com/BerriAI/litellm/issues/3162#issuecomment-2075273807

160acc085a | 2024-04-25 11:57:27 -07:00 | Krrish Dholakia
    fix(router.py): fix default retry logic

4e707af592 | 2024-04-24 23:19:14 -07:00 | Ishaan Jaff
    Revert "fix(router.py): fix max retries on set_client"
    This reverts commit 821844c1a3.

72dffdba39 | 2024-04-24 23:06:53 -07:00 | Krrish Dholakia
    fix(router.py): fix default retry on router

821844c1a3 | 2024-04-24 22:03:01 -07:00 | Krrish Dholakia
    fix(router.py): fix max retries on set_client

2c7f4695d9 | 2024-04-24 20:42:52 -07:00 | Ishaan Jaff
    Merge pull request #3283 from BerriAI/litellm_debug_lowest_latency
    [Fix] Add better observability for debugging lowest latency routing

212369498e | 2024-04-24 20:13:45 -07:00 | Ishaan Jaff
    fix - set latency stats in kwargs

5650e8ea44 | 2024-04-24 20:13:10 -07:00 | Krrish Dholakia
    feat(router.py): support mock testing fallbacks flag
    allow user to test if fallbacks work as expected with a `mock_testing_fallbacks = True` flag set during a call
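The `mock_testing_fallbacks` flag described above forces the primary call to fail artificially so the fallback path can be exercised without waiting for a real outage. The control flow can be sketched like this (a standalone illustration of the pattern, not litellm's code):

```python
class MockFallbackError(Exception):
    """Stand-in failure raised when mock fallback testing is enabled."""

def completion_with_fallbacks(calls, mock_testing_fallbacks=False):
    """Try each callable in order; the mock flag makes the first
    (primary) call fail so the fallback path gets exercised."""
    last_err = None
    for i, call in enumerate(calls):
        try:
            if mock_testing_fallbacks and i == 0:
                raise MockFallbackError("mock failure on primary deployment")
            return call()
        except Exception as err:
            last_err = err
    raise last_err
```

With the flag off, the primary answers; with it on, the first call is forced to fail and the fallback's response comes back instead.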
94cbe5516e | 2024-04-24 17:33:00 -07:00 | Krrish Dholakia
    feat(router.py): support mock testing fallbacks flag on router

f54510b6ee | 2024-04-24 16:42:42 -07:00 | Krrish Dholakia
    fix(proxy_server.py): fix /config/update
    allows updating router config via the UI and having the change propagated across all proxy instances by persisting config changes to the db

bae6f41017 | 2024-04-24 15:40:02 -07:00 | Krrish Dholakia
    build(add-fallbacks-on-UI): allow admins to add fallbacks on the UI

3d1a158b63 | 2024-04-24 09:53:42 -07:00 | Ishaan Jaff
    feat - update deployments

41ab5f2f56 | 2024-04-23 12:07:58 -07:00 | Ishaan Jaff
    fix - updating router settings from DB

9d2726c2ac | 2024-04-23 10:52:28 -07:00 | Krrish Dholakia
    fix(proxy_server.py): handle router being initialized without a model list

a520e1bd6f | 2024-04-22 12:58:59 -07:00 | Krrish Dholakia
    fix(router.py): add random shuffle and tpm-based shuffle for async shuffle logic
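The shuffle commit above mixes plain random selection with TPM-weighted selection. A weighted pick can be sketched as follows (the deployment dict shape and `tpm_weighted` parameter are assumptions for illustration):

```python
import random

def shuffle_pick(deployments, tpm_weighted=False):
    """Pick one deployment: uniformly at random, or weighted by each
    deployment's TPM limit so higher-capacity deployments receive
    proportionally more traffic."""
    if tpm_weighted:
        weights = [d.get("tpm", 1) for d in deployments]
        return random.choices(deployments, weights=weights, k=1)[0]
    return random.choice(deployments)
```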
b96741e4f4 | 2024-04-20 15:01:12 -07:00 | Krrish Dholakia
    fix(router.py): async simple-shuffle support

c96ca1f85e | 2024-04-20 13:12:12 -07:00 | Krrish Dholakia
    fix(router.py): improve debug logs

0f69f0b44e | 2024-04-20 12:56:54 -07:00 | Krrish Dholakia
    test(test_router_max_parallel_requests.py): more extensive testing for setting max parallel requests

7aa737cf10 | 2024-04-20 12:34:09 -07:00 | Krrish Dholakia
    fix(router.py): log at info level whether router caching is set up

47e9d5f2ec | 2024-04-20 12:08:21 -07:00 | Krrish Dholakia
    fix(router.py): fix init line for self.default_max_parallel_requests

4c78f8f309 | 2024-04-20 10:43:18 -07:00 | Krrish Dholakia
    fix(router.py): calculate max_parallel_requests from given tpm limits
    use the Azure formula to derive rpm -> max_parallel_requests from a deployment's tpm limits
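The TPM-to-parallel-requests derivation above presumably follows Azure OpenAI's published quota ratio of roughly 6 requests per minute per 1,000 tokens per minute; whether litellm uses exactly this constant is an assumption here, so treat this as a sketch:

```python
def max_parallel_from_tpm(tpm: int) -> int:
    """Derive a requests-per-minute ceiling from a TPM limit using the
    Azure OpenAI ratio (~6 RPM per 1,000 TPM), then use that RPM value
    as the cap on concurrent requests for the deployment."""
    rpm = (tpm // 1000) * 6
    return max(rpm, 1)  # never allow a zero parallel-request cap
```

For example, a deployment with a 240,000 TPM quota maps to a cap of 1,440 parallel requests, while a tiny 500 TPM quota still allows one request at a time.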
f1340b52dc | 2024-04-18 22:16:16 -07:00 | Krish Dholakia
    Merge pull request #3153 from BerriAI/litellm_usage_based_routing_v2_improvements
    usage based routing v2 improvements - unit testing + *NEW* async + sync 'pre_call_checks'

9c42c847a5 | 2024-04-18 21:54:25 -07:00 | Krrish Dholakia
    fix(router.py): instrument pre-call-checks for all openai endpoints

81573b2dd9 | 2024-04-18 21:38:00 -07:00 | Krrish Dholakia
    fix(test_lowest_tpm_rpm_routing_v2.py): unit testing for usage-based-routing-v2

67d356b933 | 2024-04-18 20:58:02 -07:00 | Ishaan Jaff
    fix - show api base on hanging requests

2ffd057042 | 2024-04-16 18:13:40 -07:00 | Krrish Dholakia
    test(test_models.py): ensure only admin can call /health

caec0a3938 | 2024-04-16 18:04:21 -07:00 | Ishaan Jaff
    fix - make router set_settings non-blocking

7e99854d05 | 2024-04-16 16:57:42 -07:00 | Ishaan Jaff
    Merge pull request #3079 from BerriAI/litellm_router_save_settings_ui
    UI - Save / Edit router settings UI

59b154f152 | 2024-04-16 15:36:26 -07:00 | Ishaan Jaff
    feat - update router settings on Admin UI

13cd252f3e | 2024-04-16 15:17:18 -07:00 | Krrish Dholakia
    fix(proxy_server.py): ensure id used in delete deployment matches id used in litellm Router

e271ce8030 | 2024-04-16 14:22:54 -07:00 | Ishaan Jaff
    router - get settings

2d4fe072ad | 2024-04-15 18:34:58 -07:00 | Krrish Dholakia
    fix(proxy_server.py): fix delete models endpoint
    Fixes https://github.com/BerriAI/litellm/issues/2951

e4bcc51e44 | 2024-04-15 15:59:36 -07:00 | Krrish Dholakia
    build(ui): add vertex ai models via ui

0d2a75d301 | 2024-04-15 14:45:01 -07:00 | Krish Dholakia
    Merge pull request #2981 from grav/grav/default_model_name_to_none
    Default model_name to None in _aembedding

43c37c31ea | 2024-04-15 09:02:18 -07:00 | Krrish Dholakia
    fix(proxy_server.py): return none if no model list set in router
    Fixes https://github.com/BerriAI/litellm/issues/2979