Ishaan Jaff | 5ba4f5b4f1 | feat - include model name in cool down alerts | 2024-05-16 12:52:15 -07:00
Ishaan Jaff | 48c92b1612 | fix - router show better client side errors | 2024-05-16 09:01:27 -07:00
Krrish Dholakia | 7d71e41992 | fix(router.py): fix validation error for default fallback | 2024-05-15 13:23:00 -07:00
Krrish Dholakia | 5c33145ee6 | fix(router.py): add validation for how router fallbacks are setup; prevent user errors | 2024-05-15 10:44:16 -07:00
Ishaan Jaff | 2d08d766ed | feat - router use _is_cooldown_required | 2024-05-15 10:03:55 -07:00
Ishaan Jaff | 543909a200 | feat - don't cooldown deployment on BadRequestError | 2024-05-15 09:03:27 -07:00
Krrish Dholakia | cb758fbfad | fix(router.py): error string fix | 2024-05-14 11:20:57 -07:00
Krrish Dholakia | b054f39bab | fix(init.py): set 'default_fallbacks' as a litellm_setting | 2024-05-14 11:15:53 -07:00
sumanth | 4bbd9c866c | addressed comments | 2024-05-14 10:05:19 +05:30
Krrish Dholakia | 55b62f3334 | fix(router.py): fix typing | 2024-05-13 18:06:10 -07:00
Krrish Dholakia | 6f20389bd5 | feat(router.py): enable default fallbacks; allow user to define a generic list of fallbacks, in case a new deployment is bad; Closes https://github.com/BerriAI/litellm/issues/3623 | 2024-05-13 17:49:56 -07:00
Krrish Dholakia | 044177d5ff | fix(router.py): overloads fix | 2024-05-13 17:04:04 -07:00
Krrish Dholakia | 684e4e8c89 | fix(router.py): overloads for better router.acompletion typing | 2024-05-13 14:27:16 -07:00
Krrish Dholakia | f162835937 | fix(router.py): give an 'info' log when fallbacks work successfully | 2024-05-13 10:17:32 -07:00
Krrish Dholakia | 56b6efae50 | fix(slack_alerting.py): don't fire spam alerts when backend api call fails | 2024-05-13 10:04:43 -07:00
Krrish Dholakia | 8575cdf562 | fix(router.py): fix error message to return if pre-call-checks + allowed model region | 2024-05-13 09:04:38 -07:00
Krish Dholakia | 784ae85ba0 | Merge branch 'main' into litellm_bedrock_command_r_support | 2024-05-11 21:24:42 -07:00
Ishaan Jaff | f862539282 | fix get healthy deployments | 2024-05-11 19:46:35 -07:00
Ishaan Jaff | 8f2e61dccc | fix - test router fallbacks | 2024-05-11 19:13:22 -07:00
Ishaan Jaff | ffdf68d7e8 | fix - _time_to_sleep_before_retry | 2024-05-11 19:08:10 -07:00
Ishaan Jaff | a3b4074c22 | unify sync and async logic for retries | 2024-05-11 18:17:04 -07:00
Ishaan Jaff | 04bb2bf9f2 | fix _time_to_sleep_before_retry | 2024-05-11 18:05:12 -07:00
Ishaan Jaff | c57ddf0537 | fix _time_to_sleep_before_retry logic | 2024-05-11 18:00:02 -07:00
Ishaan Jaff | 9ca793fffd | router - clean up should_retry_this_error | 2024-05-11 17:30:21 -07:00
Ishaan Jaff | 6a967b3267 | retry logic on router | 2024-05-11 17:04:19 -07:00
Krrish Dholakia | bd0c3a81cb | fix(bedrock_httpx.py): working async bedrock command r calls | 2024-05-11 16:45:20 -07:00
Ishaan Jaff | b71f35de72 | Merge pull request #3585 from BerriAI/litellm_router_batch_comp; [Litellm Proxy + litellm.Router] - Pass the same message/prompt to N models | 2024-05-11 13:51:45 -07:00
Ishaan Jaff | 6704b32e44 | feat - router async batch acompletion | 2024-05-11 13:08:16 -07:00
Krish Dholakia | 7f64c61275 | Merge pull request #3582 from BerriAI/litellm_explicit_region_name_setting; feat(router.py): allow setting model_region in litellm_params | 2024-05-11 11:36:22 -07:00
Krrish Dholakia | 691c185ff8 | feat(router.py): support region routing for bedrock, vertex ai, watsonx | 2024-05-11 11:04:00 -07:00
Krrish Dholakia | 2ed155b4d4 | feat(router.py): allow setting model_region in litellm_params; Closes https://github.com/BerriAI/litellm/issues/3580 | 2024-05-11 10:18:08 -07:00
Krish Dholakia | 997ef2e480 | Merge pull request #3507 from Manouchehri/oidc-3505-part-1; Initial OIDC support (Google/GitHub/CircleCI -> Amazon Bedrock & Azure OpenAI) | 2024-05-11 09:25:17 -07:00
Krish Dholakia | 1510f3a37a | Merge pull request #3576 from BerriAI/litellm_langfuse_fix; fix(langfuse.py): fix logging user_id in trace param on new trace creation | 2024-05-10 19:27:34 -07:00
Krrish Dholakia | 3a98b6b8df | fix(langfuse.py): fix logging user_id in trace param on new trace creation; Closes https://github.com/BerriAI/litellm/issues/3560 | 2024-05-10 18:25:07 -07:00
Ishaan Jaff | 58acc76352 | fix auto inferring region | 2024-05-10 16:08:05 -07:00
Ishaan Jaff | 2848c0ff2b | fix AUTO_INFER_REGION | 2024-05-10 14:08:38 -07:00
Ishaan Jaff | 4faaf30fe1 | fix bug upsert_deployment | 2024-05-10 13:54:52 -07:00
Ishaan Jaff | 70e37877c2 | fix - upsert_deployment logic | 2024-05-10 13:41:51 -07:00
Ishaan Jaff | 99fd2228ba | fix feature flag logic | 2024-05-10 12:50:46 -07:00
Ishaan Jaff | db22da0dbc | fix - explain why behind feature flag | 2024-05-10 12:39:19 -07:00
Ishaan Jaff | 437af37f97 | fix hide - _auto_infer_region behind a feature flag | 2024-05-10 12:38:06 -07:00
Ishaan Jaff | 4244bb8a57 | fix - router add model logic | 2024-05-10 12:32:16 -07:00
Krrish Dholakia | 62ba6f20f1 | test(test_router_fallbacks.py): fix test | 2024-05-10 09:58:40 -07:00
Krrish Dholakia | 0ea8222508 | feat(router.py): enable filtering model group by 'allowed_model_region' | 2024-05-08 22:10:17 -07:00
Ishaan Jaff | 051f20ca4b | feat - send alert on cooling down a deploymeny | 2024-05-08 14:14:14 -07:00
David Manouchehri | 5d62f8ee6c | fix(router.py): Add missing azure_ad_token param. | 2024-05-08 15:47:46 +00:00
Krish Dholakia | 1eb75273cf | Merge branch 'main' into litellm_ui_fixes_6 | 2024-05-07 22:01:04 -07:00
Krrish Dholakia | ae442f895b | feat(ui/model_dashboard.tsx): show if model is config or db model | 2024-05-07 21:34:18 -07:00
Krrish Dholakia | b9ec7acb08 | feat(model_dashboard.tsx): allow user to edit input cost per token for model on ui; also contains fixes for `/model/update` | 2024-05-07 20:57:21 -07:00
Krrish Dholakia | 1882ee1c4c | feat(ui/model_dashboard.tsx): show if model is config or db model | 2024-05-07 18:29:14 -07:00