Commit graph

471 commits

Author SHA1 Message Date
Ishaan Jaff
17c6ea2272 fix - update abatch_completion docstring 2024-05-28 22:27:09 -07:00
Ishaan Jaff
aca5118a83 feat - router add abatch_completion 2024-05-28 22:19:33 -07:00
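
`abatch_completion` fans a single prompt out to several model groups at once. A minimal sketch of a call, assuming the method takes a list of model-group names plus a shared `messages` list (as the commit message suggests); model names and keys below are placeholders:

```python
import asyncio
from litellm import Router

# Placeholder model list; swap in your own deployments and keys.
router = Router(model_list=[
    {"model_name": "gpt-3.5-turbo",
     "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-..."}},
    {"model_name": "claude-3-haiku",
     "litellm_params": {"model": "anthropic/claude-3-haiku-20240307",
                        "api_key": "sk-ant-..."}},
])

async def main():
    # Same messages to every listed model group; one response per model.
    responses = await router.abatch_completion(
        models=["gpt-3.5-turbo", "claude-3-haiku"],
        messages=[{"role": "user", "content": "Hello, who are you?"}],
    )
    for r in responses:
        print(r)

asyncio.run(main())
```
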
Krrish Dholakia
98ebcad52d fix(router.py): support batch completions fastest response streaming 2024-05-28 21:51:09 -07:00
Krrish Dholakia
012bde0b07 fix(router.py): support comma-separated model list for batch completion fastest response 2024-05-28 21:34:37 -07:00
Krrish Dholakia
792b25c772 feat(proxy_server.py): enable batch completion fastest response calls on proxy
introduces new `fastest_response` flag for enabling the call
2024-05-28 20:09:31 -07:00
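
On the proxy, the commit introduces a `fastest_response` flag. A hedged sketch of what a request might look like, assuming the flag rides in the `/chat/completions` body next to a comma-separated model list; the URL and key are placeholders:

```python
import httpx

# Placeholder proxy URL and key; the request shape is an assumption based on
# the commit's `fastest_response` flag and comma-separated model list.
resp = httpx.post(
    "http://0.0.0.0:4000/chat/completions",
    headers={"Authorization": "Bearer sk-1234"},
    json={
        "model": "gpt-4o, groq-llama",  # comma-separated model groups
        "messages": [{"role": "user", "content": "ping"}],
        "fastest_response": True,       # race the listed models
    },
)
print(resp.json())
```
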
Krrish Dholakia
3676c00235 feat(router.py): support fastest response batch completion call
returns fastest response. cancels others.
2024-05-28 19:44:41 -07:00
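
Under the hood this is a race: launch every candidate call, keep the first result, cancel the rest. A self-contained asyncio illustration of that pattern (not litellm's actual implementation):

```python
import asyncio

async def fastest_response(coros):
    # Illustrative only: run all candidate calls concurrently, return the
    # first result to finish, and cancel the stragglers.
    tasks = [asyncio.ensure_future(c) for c in coros]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return done.pop().result()

async def call(name, delay):
    await asyncio.sleep(delay)
    return f"response from {name}"

print(asyncio.run(fastest_response([call("fast-model", 0.1),
                                    call("slow-model", 1.0)])))
```
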
Krish Dholakia
01dc798876 Merge pull request #3847 from paneru-rajan/improve-validate-fallback-method
Improve validate-fallbacks method
2024-05-27 18:18:35 -07:00
Krrish Dholakia
23b28601b7 fix(fix-'get_model_group_info'-to-return-a-default-value-if-unmapped-model-group): allows model hub to return all model groups 2024-05-27 13:53:01 -07:00
Ishaan Jaff
69ea7d57fb feat - show openai params on model hub ui 2024-05-27 08:49:51 -07:00
Krrish Dholakia
8e9a3fef81 feat(proxy_server.py): expose new /model_group/info endpoint
returns model-group level info on supported params, max tokens, pricing, etc.
2024-05-26 14:07:35 -07:00
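
A hedged sketch of querying the new endpoint, assuming it is a plain GET on the proxy with the usual bearer auth; URL and key are placeholders:

```python
import httpx

# Expected per the commit: model-group level info on supported params,
# max tokens, pricing, etc.
info = httpx.get(
    "http://0.0.0.0:4000/model_group/info",
    headers={"Authorization": "Bearer sk-1234"},
).json()
print(info)
```
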
sujan100000
45dd4d37d0 Improve validate-fallbacks method
* No need to check for fallback_params length
* Instead of asserting, used an if condition and raised ValueError
* Improved error message
2024-05-26 19:09:07 +09:30
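
A generic sketch of the assert-to-ValueError pattern this commit describes; the exact checks in litellm's `validate_fallbacks` may differ:

```python
from typing import List, Optional

def validate_fallbacks(fallback_param: Optional[List]) -> None:
    # Sketch: explicit if-checks with actionable ValueErrors replace bare
    # asserts (which vanish under `python -O` and give opaque errors).
    if fallback_param is None:
        return
    for fallback_dict in fallback_param:
        if not isinstance(fallback_dict, dict):
            raise ValueError(f"Item '{fallback_dict}' is not a dictionary.")
        if len(fallback_dict) != 1:
            raise ValueError(
                f"Dictionary '{fallback_dict}' must have exactly one key, "
                f"but has {len(fallback_dict)} keys."
            )
```
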
Krrish Dholakia
cd34d00d80 fix(router.py): fix pre call check
only check whether response_format is supported by the model when the pre-call check is enabled
2024-05-24 20:09:15 -07:00
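
Pre-call checks are opt-in, which is why scoping the `response_format` filter to them matters. A minimal sketch, assuming the Router's `enable_pre_call_checks` flag:

```python
from litellm import Router

# Pre-call checks (context window / unsupported-param filtering) only run
# when explicitly enabled, which is the path this fix scopes the check to.
router = Router(
    model_list=[{
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-..."},
    }],
    enable_pre_call_checks=True,  # opt in to the response_format check
)
```
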
Krrish Dholakia
4536ed6f6e feat(slack_alerting.py): refactor region outage alerting to do model based alerting instead
Unable to extract the Azure region from the API base, so it makes sense to start with model-based alerting and then move to region-based alerting
2024-05-24 19:10:33 -07:00
Ishaan Jaff
84f8ead4a1 fix test_filter_invalid_params_pre_call_check 2024-05-23 21:16:32 -07:00
Krrish Dholakia
c50074a0b7 feat(ui/model_dashboard.tsx): add databricks models via admin ui 2024-05-23 20:28:54 -07:00
Krrish Dholakia
c989b92801 feat(router.py): Fixes https://github.com/BerriAI/litellm/issues/3769 2024-05-21 17:24:51 -07:00
Krish Dholakia
c0e43a7296 Merge pull request #3412 from sumanth13131/usage-based-routing-ttl-on-cache
usage-based-routing-ttl-on-cache
2024-05-21 07:58:41 -07:00
Ishaan Jaff
ef9372ce00 fix add docstring for abatch_completion_one_model_multiple_requests 2024-05-20 17:51:08 -07:00
Ishaan Jaff
13c787f9b5 feat - add abatch_completion_one_model_multiple_requests 2024-05-20 17:47:25 -07:00
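
This is the inverse fan-out: one model, N independent message threads. A sketch assuming the method takes a single model name and a list of message lists, per the method name; keys are placeholders:

```python
import asyncio
from litellm import Router

router = Router(model_list=[{
    "model_name": "gpt-3.5-turbo",
    "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-..."},
}])

async def main():
    # One model, several independent requests -- the inverse of
    # abatch_completion's one-prompt-to-N-models fan-out.
    responses = await router.abatch_completion_one_model_multiple_requests(
        model="gpt-3.5-turbo",
        messages=[
            [{"role": "user", "content": "What is 2+2?"}],
            [{"role": "user", "content": "Name a prime number."}],
        ],
    )
    print(responses)

asyncio.run(main())
```
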
Ishaan Jaff
7e6c9274fc Merge branch 'main' into litellm_standardize_slack_exception_msg_format 2024-05-20 16:39:41 -07:00
Ishaan Jaff
2ccef68c2d fix - standardize format of exceptions occurring on slack alerts 2024-05-20 16:29:16 -07:00
Ishaan Jaff
3c4bf52509 feat - read cooldown time from exception header 2024-05-17 18:50:33 -07:00
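
Reading the cooldown from the exception lets a rate-limited deployment come back as soon as the provider allows. A hypothetical sketch of the idea; the header name (`Retry-After`) and exception shape are assumptions, not litellm's exact internals:

```python
DEFAULT_COOLDOWN_SECONDS = 60.0

def cooldown_from_exception(exc: Exception) -> float:
    # Hypothetical: prefer the provider's Retry-After header, fall back to a
    # fixed cooldown when it is absent or malformed.
    headers = getattr(exc, "headers", None) or {}
    retry_after = headers.get("Retry-After")
    try:
        return float(retry_after)
    except (TypeError, ValueError):
        return DEFAULT_COOLDOWN_SECONDS
```
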
David Manouchehri
61ef93a14a Fix(router.py): Kill a bug that forced Azure OpenAI to have an API key, even though we can use OIDC instead. 2024-05-17 00:37:56 +00:00
Ishaan Jaff
5ba4f5b4f1 feat - include model name in cool down alerts 2024-05-16 12:52:15 -07:00
Ishaan Jaff
48c92b1612 fix - router show better client side errors 2024-05-16 09:01:27 -07:00
Krrish Dholakia
7d71e41992 fix(router.py): fix validation error for default fallback 2024-05-15 13:23:00 -07:00
Krrish Dholakia
5c33145ee6 fix(router.py): add validation for how router fallbacks are setup
prevent user errors
2024-05-15 10:44:16 -07:00
Ishaan Jaff
2d08d766ed feat - router use _is_cooldown_required 2024-05-15 10:03:55 -07:00
Ishaan Jaff
543909a200 feat - don't cooldown deployment on BadRequestError 2024-05-15 09:03:27 -07:00
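
Together, these two commits gate cooldowns on the kind of failure. A hypothetical sketch of the `_is_cooldown_required`-style decision (status-code policy is illustrative, not litellm's exact code):

```python
def is_cooldown_required(status_code: int) -> bool:
    # Hypothetical: client-side errors (e.g. 400 BadRequest) are the caller's
    # fault, so the deployment stays in rotation; rate limits (429) and
    # server errors still trigger a cooldown.
    if 400 <= status_code < 500 and status_code != 429:
        return False
    return True
```
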
Krrish Dholakia
cb758fbfad fix(router.py): error string fix 2024-05-14 11:20:57 -07:00
Krrish Dholakia
b054f39bab fix(init.py): set 'default_fallbacks' as a litellm_setting 2024-05-14 11:15:53 -07:00
sumanth
4bbd9c866c addressed comments 2024-05-14 10:05:19 +05:30
Krrish Dholakia
55b62f3334 fix(router.py): fix typing 2024-05-13 18:06:10 -07:00
Krrish Dholakia
6f20389bd5 feat(router.py): enable default fallbacks
allow user to define a generic list of fallbacks, in case a new deployment is bad

Closes https://github.com/BerriAI/litellm/issues/3623
2024-05-13 17:49:56 -07:00
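
Default fallbacks act as a generic safety net when a deployment has no model-specific fallback; the earlier `litellm_settings` commit exposes the same knob on the proxy. A minimal sketch, assuming the `default_fallbacks` constructor argument:

```python
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "new-deployment",
         "litellm_params": {"model": "openai/my-new-model", "api_key": "sk-..."}},
        {"model_name": "gpt-3.5-turbo",
         "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-..."}},
    ],
    # If a deployment fails and has no model-specific fallback configured,
    # retry against this generic list.
    default_fallbacks=["gpt-3.5-turbo"],
)
```
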
Krrish Dholakia
044177d5ff fix(router.py): overloads fix 2024-05-13 17:04:04 -07:00
Krrish Dholakia
684e4e8c89 fix(router.py): overloads for better router.acompletion typing 2024-05-13 14:27:16 -07:00
Krrish Dholakia
f162835937 fix(router.py): give an 'info' log when fallbacks work successfully 2024-05-13 10:17:32 -07:00
Krrish Dholakia
56b6efae50 fix(slack_alerting.py): don't fire spam alerts when backend api call fails 2024-05-13 10:04:43 -07:00
Krrish Dholakia
8575cdf562 fix(router.py): fix error message to return if pre-call-checks + allowed model region 2024-05-13 09:04:38 -07:00
Krish Dholakia
784ae85ba0 Merge branch 'main' into litellm_bedrock_command_r_support 2024-05-11 21:24:42 -07:00
Ishaan Jaff
f862539282 fix get healthy deployments 2024-05-11 19:46:35 -07:00
Ishaan Jaff
8f2e61dccc fix - test router fallbacks 2024-05-11 19:13:22 -07:00
Ishaan Jaff
ffdf68d7e8 fix - _time_to_sleep_before_retry 2024-05-11 19:08:10 -07:00
Ishaan Jaff
a3b4074c22 unify sync and async logic for retries 2024-05-11 18:17:04 -07:00
Ishaan Jaff
04bb2bf9f2 fix _time_to_sleep_before_retry 2024-05-11 18:05:12 -07:00
Ishaan Jaff
c57ddf0537 fix _time_to_sleep_before_retry logic 2024-05-11 18:00:02 -07:00
Ishaan Jaff
9ca793fffd router - clean up should_retry_this_error 2024-05-11 17:30:21 -07:00
Ishaan Jaff
6a967b3267 retry logic on router 2024-05-11 17:04:19 -07:00
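
The retry commits converge the sync and async paths on one sleep computation. A hypothetical sketch of `_time_to_sleep_before_retry`-style logic (names and policy are illustrative, not the actual implementation): skip the wait when other healthy deployments exist, honor the provider's `Retry-After` when present, otherwise back off exponentially:

```python
from typing import Optional

def time_to_sleep_before_retry(
    attempt: int,
    retry_after: Optional[float],
    num_healthy_deployments: int,
    base_delay: float = 0.5,
    max_delay: float = 30.0,
) -> float:
    # Hypothetical policy sketch, not litellm's actual code.
    if num_healthy_deployments > 1:
        return 0.0  # another deployment can absorb the retry immediately
    if retry_after is not None:
        return min(retry_after, max_delay)  # trust the provider's hint
    return min(base_delay * (2 ** attempt), max_delay)  # exponential backoff
```
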
Krrish Dholakia
bd0c3a81cb fix(bedrock_httpx.py): working async bedrock command r calls 2024-05-11 16:45:20 -07:00
Ishaan Jaff
b71f35de72 Merge pull request #3585 from BerriAI/litellm_router_batch_comp
[Litellm Proxy + litellm.Router] - Pass the same message/prompt to N models
2024-05-11 13:51:45 -07:00