(Refactor) - migrate bedrock invoke to BaseLLMHTTPHandler class (#8290)

* initial transform for invoke

* invoke transform_response

* working - able to make request

* working get_complete_url

* working - invoke now runs on llm_http_handler

* fix unused imports

* track litellm overhead ms

* working stream request

* sign_request transform

* sign_request update

* use has_async_custom_stream_wrapper property

* use get_async_custom_stream_wrapper in base llm http handler

* fix make_call in invoke handler

* fix invoke with streaming get_async_custom_stream_wrapper

* working bedrock async streaming with invoke

* fix make call handler for bedrock

* test_all_model_configs

* fix test_bedrock_custom_prompt_template

* sync streaming for bedrock invoke

* fix _add_stream_param_to_request_body

* test_async_text_completion_bedrock

* fix transform_request

* fix get_supported_openai_params

* fix test supports tool choice

* fix test_supports_tool_choice

* add unit test coverage for bedrock invoke transform

* fix location of transformation files

* update import loc

* fix bedrock invoke unit tests

* fix import for max completion tokens
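
Taken together, the steps above move Bedrock's invoke request/response handling out of a bespoke handler and into a provider config driven by the shared BaseLLMHTTPHandler. The sketch below shows the general shape of that pattern only; the method names echo the hooks mentioned in the commit messages (transform_request, transform_response, sign_request, get_complete_url, has_async_custom_stream_wrapper), but every signature, parameter, and body here is an illustrative assumption, not litellm's actual interface.

    # Illustrative sketch only -- signatures and bodies are assumptions, not litellm code.
    from typing import Any, Dict, Tuple

    class InvokeStyleConfig:
        """Hypothetical provider config consumed by a shared HTTP handler."""

        @property
        def has_async_custom_stream_wrapper(self) -> bool:
            # Bedrock invoke streaming needs a custom event-stream decoder,
            # so the handler asks the config whether to wrap the stream itself.
            return True

        def get_complete_url(self, api_base: str, model: str, stream: bool) -> str:
            # Assumed URL shape for the invoke endpoint.
            action = "invoke-with-response-stream" if stream else "invoke"
            return f"{api_base}/model/{model}/{action}"

        def sign_request(self, headers: Dict[str, str], url: str, body: bytes) -> Tuple[Dict[str, str], bytes]:
            # Placeholder for SigV4 signing of the outgoing request.
            signed = {**headers, "Authorization": "AWS4-HMAC-SHA256 ..."}
            return signed, body

        def transform_request(self, messages: list, optional_params: dict) -> dict:
            # Map OpenAI-style messages/params onto the provider's invoke body.
            return {"prompt": messages[-1]["content"], **optional_params}

        def transform_response(self, raw: Dict[str, Any]) -> Dict[str, Any]:
            # Map the provider's response back onto the OpenAI-style shape.
            return {"choices": [{"message": {"content": raw.get("completion", "")}}]}

    class MinimalHTTPHandler:
        """Stand-in for the shared handler that drives any config of this shape."""

        def completion(self, config: InvokeStyleConfig, model: str, messages: list) -> dict:
            url = config.get_complete_url("https://bedrock-runtime.us-east-1.amazonaws.com", model, stream=False)
            body = config.transform_request(messages, {})
            headers, _ = config.sign_request({}, url, repr(body).encode())
            raw = {"completion": "stubbed response"}  # ... HTTP call elided ...
            return config.transform_response(raw)

In the actual change the same handler path also drives sync and async streaming, using get_async_custom_stream_wrapper when has_async_custom_stream_wrapper is set.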
Author: Ishaan Jaff
Date:   2025-02-05 18:58:55 -08:00
Committed by: GitHub
Parent: 3f206cc2b4
Commit: 8e0736d5ad
22 changed files with 1870 additions and 737 deletions

@@ -6077,6 +6077,8 @@ class ProviderConfigManager:
                 return litellm.AmazonCohereConfig()
             elif bedrock_provider == "mistral":  # mistral models on bedrock
                 return litellm.AmazonMistralConfig()
+            else:
+                return litellm.AmazonInvokeConfig()
         return litellm.OpenAIGPTConfig()
 
     @staticmethod
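
The hunk above makes AmazonInvokeConfig the fallback for any Bedrock sub-provider that has no dedicated config class. With invoke now running on the shared handler, a call like the one below exercises the new path end to end; the model id, region, and routing prefix are illustrative placeholders rather than values taken from this commit.

    # Illustrative call; the model id and AWS settings are placeholders, not from this commit.
    import litellm

    response = litellm.completion(
        model="bedrock/invoke/anthropic.claude-3-sonnet-20240229-v1:0",
        messages=[{"role": "user", "content": "Hello from the invoke route"}],
        aws_region_name="us-east-1",
    )
    print(response.choices[0].message.content)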