litellm-mirror/litellm/router_utils
commit 550677e63d by Krish Dholakia: Litellm dev 12 11 2024 v2 (#7215)
* feat(bedrock/): add bedrock converse top k param

Closes https://github.com/BerriAI/litellm/issues/7087
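A minimal sketch of what this parameter mapping could look like. This is not the actual litellm implementation; the function name `map_top_k_to_converse` is hypothetical. It assumes the common Bedrock Converse shape, where `temperature`/`topP`/`maxTokens` live in `inferenceConfig` and a top-k value is forwarded through `additionalModelRequestFields` (the exact field name can vary by model family):

```python
def map_top_k_to_converse(optional_params: dict) -> dict:
    """Sketch: route an OpenAI-style ``top_k`` param into a Bedrock
    Converse request body (hypothetical helper, not litellm's API)."""
    request: dict = {"inferenceConfig": {}}
    for key, value in optional_params.items():
        if key == "top_k":
            # Anthropic models on Bedrock accept ``top_k`` here; other
            # model families may expect a different field name (assumption).
            request.setdefault("additionalModelRequestFields", {})["top_k"] = value
        elif key in ("temperature", "topP", "maxTokens"):
            # These are first-class Converse inferenceConfig fields.
            request["inferenceConfig"][key] = value
    return request
```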

* Fix bedrock empty content error (#7177)

* add resolver

* handle empty content on bedrock with default content

* use existing default message, tests

* Update tests/llm_translation/test_bedrock_completion.py

* fix tests

* Revert "add resolver"

This reverts commit c717e376ee.

* fallback to empty

---------

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* fix(factory.py): handle empty content blocks in messages

Fixes https://github.com/BerriAI/litellm/issues/7169
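The "fallback to empty" / default-content approach described above can be sketched roughly as follows. The helper name `drop_empty_text_blocks` and the placeholder text are assumptions for illustration, not litellm's actual code:

```python
DEFAULT_USER_CONTENT = "Please continue."  # hypothetical placeholder text

def drop_empty_text_blocks(content_blocks: list) -> list:
    """Sketch: drop text blocks whose text is empty or whitespace-only.
    If nothing survives, fall back to a single default block so the
    provider still receives valid, non-empty content."""
    kept = [
        block
        for block in content_blocks
        if not (block.get("type") == "text" and not block.get("text", "").strip())
    ]
    if not kept:
        return [{"type": "text", "text": DEFAULT_USER_CONTENT}]
    return kept
```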

* feat(router.py): add stripped model check to model fallback search

If model_name="openai/gpt-3.5-turbo" and fallback=[{"gpt-3.5-turbo"..}], the fallback should still match and work as expected, even though the configured fallback key lacks the provider prefix.
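The stripped-model lookup described above can be sketched like this. The function name `get_fallback_model_group` is hypothetical; it assumes fallbacks are configured as a list of single-key dicts mapping a model group to its fallback list, and that the provider prefix is everything before the first `/`:

```python
from typing import Optional

def get_fallback_model_group(fallbacks: list, model_group: str) -> Optional[list]:
    """Sketch: look up a fallback entry by exact model name first, then
    retry with any provider prefix (e.g. "openai/") stripped off."""
    stripped = model_group.split("/", 1)[-1] if "/" in model_group else model_group
    for candidate in (model_group, stripped):
        for entry in fallbacks:
            if candidate in entry:
                return entry[candidate]
    return None
```

Checking the exact name before the stripped name keeps a prefixed fallback entry, if one exists, taking precedence over the unprefixed one.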

* fix: fix linting error

* fix(factory.py): fix linting error

* fix(factory.py): in base case still support skip empty text blocks

---------

Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
2024-12-13 12:49:57 -08:00
router_callbacks (testing) Router add testing coverage (#6253) 2024-10-16 07:32:27 -07:00
batch_utils.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
client_initalization_utils.py fix use consistent naming (#7092) 2024-12-07 22:01:00 -08:00
cooldown_cache.py LiteLLM Minor Fixes & Improvements (10/28/2024) (#6475) 2024-10-29 17:20:24 -07:00
cooldown_callbacks.py (feat) provider budget routing improvements (#6827) 2024-11-19 21:25:08 -08:00
cooldown_handlers.py redis otel tracing + async support for latency routing (#6452) 2024-10-28 21:52:12 -07:00
fallback_event_handlers.py Litellm dev 12 11 2024 v2 (#7215) 2024-12-13 12:49:57 -08:00
get_retry_from_policy.py Litellm dev 12 06 2024 (#7067) 2024-12-06 22:44:18 -08:00
handle_error.py (UI) Fix viewing members, keys in a team + added testing (#6514) 2024-10-30 23:51:13 +05:30
pattern_match_deployments.py Litellm dev 12 12 2024 (#7203) 2024-12-13 08:54:03 -08:00
prompt_caching_cache.py Litellm dev 12 07 2024 (#7086) 2024-12-08 00:30:33 -08:00
response_headers.py LiteLLM Minor Fixes & Improvements (11/26/2024) (#6913) 2024-11-28 00:01:38 +05:30