Commit graph

654 commits

Author SHA1 Message Date
Ishaan Jaff
92a4df00d4 fix add doc string for abatch_completion_one_model_multiple_requests 2024-05-20 17:51:08 -07:00
Ishaan Jaff
5be966dc09 feat - add abatch_completion_one_model_multiple_requests 2024-05-20 17:47:25 -07:00
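The two commits above add `abatch_completion_one_model_multiple_requests`, which sends the same model several independent message lists concurrently. A minimal sketch of that fan-out pattern, using a stubbed completion coroutine in place of litellm's real async call (the stub name and return shape are assumptions, not litellm's API):

```python
import asyncio

async def fake_acompletion(model, messages):
    # stand-in for litellm's async completion call (hypothetical stub)
    return {"model": model, "content": f"echo: {messages[-1]['content']}"}

async def abatch_one_model_many_requests(model, list_of_messages):
    # fan the same model out over N independent message lists concurrently
    tasks = [fake_acompletion(model, m) for m in list_of_messages]
    return await asyncio.gather(*tasks)

results = asyncio.run(abatch_one_model_many_requests(
    "gpt-3.5-turbo",
    [
        [{"role": "user", "content": "hi"}],
        [{"role": "user", "content": "bye"}],
    ],
))
```

`asyncio.gather` preserves input order, so `results[i]` corresponds to the i-th message list.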
Ishaan Jaff
8413fdf4c7 Merge branch 'main' into litellm_standardize_slack_exception_msg_format 2024-05-20 16:39:41 -07:00
Ishaan Jaff
f11de863f6 fix - standardize format of exceptions occurring on slack alerts 2024-05-20 16:29:16 -07:00
Ishaan Jaff
6368d5a725 feat - read cooldown time from exception header 2024-05-17 18:50:33 -07:00
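The "read cooldown time from exception header" commit suggests the router prefers a provider-supplied `Retry-After` value over a fixed cooldown. A hedged sketch of that lookup (function name and default are illustrative assumptions, not litellm's implementation):

```python
def cooldown_time_from_exception(exc_headers, default_cooldown=60.0):
    # Prefer the provider's Retry-After header when present;
    # fall back to a configured default cooldown otherwise.
    retry_after = exc_headers.get("Retry-After") or exc_headers.get("retry-after")
    if retry_after is not None:
        try:
            return float(retry_after)
        except ValueError:
            # header was not a plain number (e.g. an HTTP-date) -- ignore it
            pass
    return default_cooldown
```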
David Manouchehri
50accc327c Fix(router.py): Kill a bug that forced Azure OpenAI to have an API key, even though we can use OIDC instead. 2024-05-17 00:37:56 +00:00
Ishaan Jaff
d16a6c03a2 feat - include model name in cool down alerts 2024-05-16 12:52:15 -07:00
Ishaan Jaff
848561a8a7 fix - router show better client side errors 2024-05-16 09:01:27 -07:00
Krrish Dholakia
d9ad7c6218 fix(router.py): fix validation error for default fallback 2024-05-15 13:23:00 -07:00
Krrish Dholakia
dba713ea43 fix(router.py): add validation for how router fallbacks are setup
prevent user errors
2024-05-15 10:44:16 -07:00
Ishaan Jaff
f17f0a09d8 feat - router use _is_cooldown_required 2024-05-15 10:03:55 -07:00
Ishaan Jaff
52f8c39bbf feat - don't cooldown deployment on BadRequestError 2024-05-15 09:03:27 -07:00
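The `_is_cooldown_required` and "don't cooldown deployment on BadRequestError" commits together imply a status-code gate: client errors reflect the request, not the deployment's health, so they should not take the deployment out of rotation. A minimal sketch of that rule (the function below is an illustration, not litellm's actual `_is_cooldown_required`):

```python
def is_cooldown_required(status_code: int) -> bool:
    # 429 means the deployment is rate-limited -- cool it down.
    if status_code == 429:
        return True
    # Other 4xx errors (e.g. 400 BadRequestError) are caused by the
    # request itself, so cooling down the deployment would not help.
    if 400 <= status_code < 500:
        return False
    # 5xx and anything else: assume the deployment is unhealthy.
    return True
```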
Krrish Dholakia
151902f7d9 fix(router.py): error string fix 2024-05-14 11:20:57 -07:00
Krrish Dholakia
7557b3e2ff fix(init.py): set 'default_fallbacks' as a litellm_setting 2024-05-14 11:15:53 -07:00
sumanth
71e0294485 addressed comments 2024-05-14 10:05:19 +05:30
Krrish Dholakia
38988f030a fix(router.py): fix typing 2024-05-13 18:06:10 -07:00
Krrish Dholakia
5488bf4921 feat(router.py): enable default fallbacks
allow user to define a generic list of fallbacks, in case a new deployment is bad

Closes https://github.com/BerriAI/litellm/issues/3623
2024-05-13 17:49:56 -07:00
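The "enable default fallbacks" commit lets a user define a generic fallback list that applies when no model-specific fallback mapping matches. A hedged sketch of the resolution order (helper name and data shapes are assumptions for illustration):

```python
def resolve_fallbacks(model, fallbacks, default_fallbacks):
    # fallbacks: list of {model_name: [fallback_model, ...]} mappings;
    # a per-model mapping wins over the generic default list.
    for mapping in fallbacks:
        if model in mapping:
            return mapping[model]
    return default_fallbacks

chosen = resolve_fallbacks(
    "gpt-4",
    fallbacks=[{"gpt-4": ["gpt-3.5-turbo"]}],
    default_fallbacks=["claude-3-haiku"],
)
```

With this order, a brand-new (possibly misconfigured) deployment still gets the generic safety net even before anyone writes a model-specific fallback rule for it.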
Krrish Dholakia
af9489bbfd fix(router.py): overloads fix 2024-05-13 17:04:04 -07:00
Krrish Dholakia
1312eece6d fix(router.py): overloads for better router.acompletion typing 2024-05-13 14:27:16 -07:00
Krrish Dholakia
7f6e933372 fix(router.py): give an 'info' log when fallbacks work successfully 2024-05-13 10:17:32 -07:00
Krrish Dholakia
13e1577753 fix(slack_alerting.py): don't fire spam alerts when backend api call fails 2024-05-13 10:04:43 -07:00
Krrish Dholakia
5342b3dc05 fix(router.py): fix error message to return if pre-call-checks + allowed model region 2024-05-13 09:04:38 -07:00
Krish Dholakia
1d651c6049 Merge branch 'main' into litellm_bedrock_command_r_support 2024-05-11 21:24:42 -07:00
Ishaan Jaff
61a3e5d5a9 fix get healthy deployments 2024-05-11 19:46:35 -07:00
Ishaan Jaff
7930653872 fix - test router fallbacks 2024-05-11 19:13:22 -07:00
Ishaan Jaff
4d648a6d89 fix - _time_to_sleep_before_retry 2024-05-11 19:08:10 -07:00
Ishaan Jaff
a978326c99 unify sync and async logic for retries 2024-05-11 18:17:04 -07:00
Ishaan Jaff
6e39760779 fix _time_to_sleep_before_retry 2024-05-11 18:05:12 -07:00
Ishaan Jaff
3e6097d9f8 fix _time_to_sleep_before_retry logic 2024-05-11 18:00:02 -07:00
Ishaan Jaff
104fd4d048 router - clean up should_retry_this_error 2024-05-11 17:30:21 -07:00
Ishaan Jaff
18c2da213a retry logic on router 2024-05-11 17:04:19 -07:00
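The `_time_to_sleep_before_retry` commits above unify the sync and async retry paths around one sleep calculation. A plausible sketch of such a helper, honoring a provider `Retry-After` hint and otherwise using capped exponential backoff (parameter names and defaults are assumptions, not litellm's actual values):

```python
def time_to_sleep_before_retry(attempt, retry_after=None, base=1.0, cap=30.0):
    # If the provider told us how long to wait, respect it (up to the cap).
    if retry_after is not None:
        return min(float(retry_after), cap)
    # Otherwise: exponential backoff -- base, 2*base, 4*base, ... capped.
    return min(base * (2 ** attempt), cap)
```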
Krrish Dholakia
49ab1a1d3f fix(bedrock_httpx.py): working async bedrock command r calls 2024-05-11 16:45:20 -07:00
Ishaan Jaff
bf909a89f8 Merge pull request #3585 from BerriAI/litellm_router_batch_comp
[Litellm Proxy + litellm.Router] - Pass the same message/prompt to N models
2024-05-11 13:51:45 -07:00
Ishaan Jaff
9156b7448a feat - router async batch acompletion 2024-05-11 13:08:16 -07:00
Krish Dholakia
86d0c0ae4e Merge pull request #3582 from BerriAI/litellm_explicit_region_name_setting
feat(router.py): allow setting model_region in litellm_params
2024-05-11 11:36:22 -07:00
Krrish Dholakia
6714854bb7 feat(router.py): support region routing for bedrock, vertex ai, watsonx 2024-05-11 11:04:00 -07:00
Krrish Dholakia
ebc927f1c8 feat(router.py): allow setting model_region in litellm_params
Closes https://github.com/BerriAI/litellm/issues/3580
2024-05-11 10:18:08 -07:00
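The region-routing commits above let each deployment declare a `model_region` in its `litellm_params`, which the router can then filter on. A hedged sketch of that filter (deployment dict shape follows the commit messages; the function itself is illustrative, not litellm's code):

```python
def filter_by_region(deployments, allowed_model_region):
    # Keep deployments whose litellm_params declare a matching model_region.
    # When a region is required, deployments with no region set are excluded.
    if allowed_model_region is None:
        return deployments
    return [
        d for d in deployments
        if d.get("litellm_params", {}).get("model_region") == allowed_model_region
    ]

eu_only = filter_by_region(
    [
        {"model_name": "gpt-4", "litellm_params": {"model_region": "eu"}},
        {"model_name": "gpt-4", "litellm_params": {"model_region": "us"}},
        {"model_name": "gpt-4", "litellm_params": {}},
    ],
    allowed_model_region="eu",
)
```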
Krish Dholakia
40063798bd Merge pull request #3507 from Manouchehri/oidc-3505-part-1
Initial OIDC support (Google/GitHub/CircleCI -> Amazon Bedrock & Azure OpenAI)
2024-05-11 09:25:17 -07:00
Krish Dholakia
363cdb1a0c Merge pull request #3576 from BerriAI/litellm_langfuse_fix
fix(langfuse.py): fix logging user_id in trace param on new trace creation
2024-05-10 19:27:34 -07:00
Krrish Dholakia
0115c79222 fix(langfuse.py): fix logging user_id in trace param on new trace creation
Closes https://github.com/BerriAI/litellm/issues/3560
2024-05-10 18:25:07 -07:00
Ishaan Jaff
7d96272d52 fix auto inferring region 2024-05-10 16:08:05 -07:00
Ishaan Jaff
c744851d13 fix AUTO_INFER_REGION 2024-05-10 14:08:38 -07:00
Ishaan Jaff
9bbb13c373 fix bug upsert_deployment 2024-05-10 13:54:52 -07:00
Ishaan Jaff
5c69515a13 fix - upsert_deployment logic 2024-05-10 13:41:51 -07:00
Ishaan Jaff
547976448f fix feature flag logic 2024-05-10 12:50:46 -07:00
Ishaan Jaff
75d6658bbc fix - explain why behind feature flag 2024-05-10 12:39:19 -07:00
Ishaan Jaff
6fd6490d63 fix hide - _auto_infer_region behind a feature flag 2024-05-10 12:38:06 -07:00
Ishaan Jaff
9d3f01c6ae fix - router add model logic 2024-05-10 12:32:16 -07:00
Krrish Dholakia
cdec7a414f test(test_router_fallbacks.py): fix test 2024-05-10 09:58:40 -07:00
Krrish Dholakia
3d18897d69 feat(router.py): enable filtering model group by 'allowed_model_region' 2024-05-08 22:10:17 -07:00