* initial transform for invoke
* invoke transform_response
* working - able to make request
* working get_complete_url
* working - invoke now runs on llm_http_handler
* fix unused imports
* track litellm overhead ms
* working stream request
* sign_request transform
* sign_request update
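For context on the sign_request transform above, here is a minimal sketch of SigV4-signing a Bedrock invoke request with botocore, assuming credentials resolve from the default chain; the helper name, URL handling, and headers are illustrative rather than the actual LiteLLM code:

```python
# Hedged sketch: SigV4-sign a Bedrock invoke request body and return the signed headers.
import json

import boto3
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest


def sign_request(url: str, body: dict, region: str = "us-west-2") -> dict:
    credentials = boto3.Session().get_credentials()
    request = AWSRequest(
        method="POST",
        url=url,
        data=json.dumps(body),
        headers={"Content-Type": "application/json"},
    )
    # "bedrock" is the SigV4 service name used by the Bedrock runtime APIs
    SigV4Auth(credentials, "bedrock", region).add_auth(request)
    return dict(request.headers)
```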
* use has_async_custom_stream_wrapper property
* use get_async_custom_stream_wrapper in base llm http handler
* fix make_call in invoke handler
* fix invoke with streaming get_async_custom_stream_wrapper
* working bedrock async streaming with invoke
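A hedged sketch of the `has_async_custom_stream_wrapper` / `get_async_custom_stream_wrapper` pattern referenced above, with all class names assumed: a provider config advertises a custom async stream wrapper, and the shared HTTP handler branches on the property instead of hard-coding provider checks.

```python
# Illustrative names only, not the actual LiteLLM classes.
from typing import AsyncIterator


class BaseConfig:
    @property
    def has_async_custom_stream_wrapper(self) -> bool:
        return False  # default: use the generic streaming path

    def get_async_custom_stream_wrapper(self, raw_stream: AsyncIterator[bytes]):
        raise NotImplementedError


class BedrockInvokeConfig(BaseConfig):
    @property
    def has_async_custom_stream_wrapper(self) -> bool:
        return True  # Bedrock invoke streams AWS event-stream frames that need decoding

    async def get_async_custom_stream_wrapper(self, raw_stream: AsyncIterator[bytes]):
        async for frame in raw_stream:
            yield frame  # real code would decode the event-stream frame here


async def stream_chunks(config: BaseConfig, raw_stream: AsyncIterator[bytes]):
    stream = (
        config.get_async_custom_stream_wrapper(raw_stream)
        if config.has_async_custom_stream_wrapper
        else raw_stream
    )
    async for chunk in stream:
        yield chunk
```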
* fix make call handler for bedrock
* test_all_model_configs
* fix test_bedrock_custom_prompt_template
* sync streaming for bedrock invoke
* fix _add_stream_param_to_request_body
* test_async_text_completion_bedrock
* fix transform_request
* fix get_supported_openai_params
* fix test supports tool choice
* fix test_supports_tool_choice
* add unit test coverage for bedrock invoke transform
* fix location of transformation files
* update import loc
* fix bedrock invoke unit tests
* fix import for max completion tokens
* add support for using llama spec with bedrock
* fix get_bedrock_invoke_provider
* add support for using bedrock provider in mappings
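A hedged sketch of the provider mapping described in the three commits above: derive the invoke "provider" either from an explicit prefix (so a custom imported model can reuse the llama spec) or from the foundation-model id. The mapping contents and helper signature are assumptions, not the actual implementation.

```python
# Illustrative sketch of mapping a Bedrock model id to an invoke provider spec.
from typing import Optional

BEDROCK_INVOKE_PROVIDERS = {"anthropic", "amazon", "cohere", "meta", "mistral", "ai21", "llama"}


def get_bedrock_invoke_provider(model: str) -> Optional[str]:
    # explicit override, e.g. "llama/arn:aws:bedrock:...:imported-model/abc"
    first_segment = model.split("/")[0]
    if first_segment in BEDROCK_INVOKE_PROVIDERS:
        return first_segment
    # otherwise infer from the foundation-model id, e.g. "meta.llama3-1-70b-instruct-v1:0"
    vendor = model.split(".")[0]
    return vendor if vendor in BEDROCK_INVOKE_PROVIDERS else None
```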
* working request
* test_bedrock_custom_deepseek
* test_bedrock_custom_deepseek
* fix _get_model_id_for_llama_like_model
* test_bedrock_custom_deepseek
* doc DeepSeek-R1-Distill-Llama-70B
* test_bedrock_custom_deepseek
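Hedged usage sketch to go with the DeepSeek-R1-Distill-Llama-70B docs above: route a Bedrock custom imported model through the llama invoke spec. The ARN and region are placeholders.

```python
import litellm

# Custom imported model served via Bedrock, using the llama prompt/response spec.
response = litellm.completion(
    model="bedrock/llama/arn:aws:bedrock:us-west-2:123456789012:imported-model/your-model-id",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```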
* fix(proxy_server.py): fix get model info when litellm_model_id is set
Fixes https://github.com/BerriAI/litellm/issues/7873
* test(test_models.py): add test to ensure get model info on specific deployment has same value as all model info
Fixes https://github.com/BerriAI/litellm/issues/7873
* fix(usage.tsx): make model analytics free
Addresses @iqballx's feedback
* fix(invoke_handler.py): fix bedrock error chunk parsing - return correct bedrock status code and error message if an error chunk is received in the stream
Improves bedrock stream error handling
* fix(proxy_server.py): fix linting errors
* test(test_auth_checks.py): remove redundant test
* fix(proxy_server.py): fix linting errors
* test: fix flaky test
* test: fix test
* fix(invoke_handler.py): fix mock response iterator to handle tool calling
returns tool calls if they are present in the model response
* fix(prometheus.py): add new 'tokens_by_tag' metric on prometheus
allows tracking 'token usage' by task
* feat(prometheus.py): add input + output token tracking by tag
* feat(prometheus.py): add tag based deployment failure tracking
allows admin to track failure by use-case
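A minimal sketch of what the tag-based token metrics above can look like with prometheus_client; the metric and label names here are illustrative, the real ones live in litellm's prometheus integration.

```python
from prometheus_client import Counter

# Assumed metric names for tracking input/output tokens per request tag.
litellm_input_tokens_by_tag = Counter(
    "litellm_input_tokens_by_tag", "Input tokens, grouped by request tag", labelnames=["tag"]
)
litellm_output_tokens_by_tag = Counter(
    "litellm_output_tokens_by_tag", "Output tokens, grouped by request tag", labelnames=["tag"]
)


def track_tokens(tags: list, prompt_tokens: int, completion_tokens: int) -> None:
    for tag in tags:
        litellm_input_tokens_by_tag.labels(tag=tag).inc(prompt_tokens)
        litellm_output_tokens_by_tag.labels(tag=tag).inc(completion_tokens)
```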
* fix(key_management_endpoints.py): override metadata field value on update
allow user to override tags
* feat(__init__.py): expose new disable_end_user_cost_tracking_prometheus_only metric
allow disabling end user cost tracking on prometheus - fixes cardinality issue
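Since the flag is exposed on the litellm module, it can be toggled like any other module-level setting (the proxy's `litellm_settings` block typically maps to the same attribute):

```python
import litellm

# Drop the high-cardinality end-user label from Prometheus metrics only;
# cost tracking elsewhere (e.g. spend logs) is unaffected.
litellm.disable_end_user_cost_tracking_prometheus_only = True
```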
* fix(litellm_pre_call_utils.py): add key/team level enforced params
Fixes https://github.com/BerriAI/litellm/issues/6652
* fix(key_management_endpoints.py): allow user to pass in `enforced_params` as a top level param on /key/generate and /key/update
* docs(enterprise.md): add docs on enforcing required params for llm requests
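Hedged example of the new top-level `enforced_params` field on /key/generate; the base URL, master key, and the enforced parameter names below are placeholders.

```python
import requests

# Generate a key that requires callers to send `user` and `metadata.generation_name`.
resp = requests.post(
    "http://localhost:4000/key/generate",
    headers={"Authorization": "Bearer sk-1234"},
    json={"enforced_params": ["user", "metadata.generation_name"]},
)
print(resp.json())
```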
* Add support of Galadriel API (#7005)
* fix(router.py): robust retry after handling
set retry-after time to 0 if >0 healthy deployments remain; handle the base case of 1 deployment
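An illustrative restatement of that retry rule (not the actual router code): only honor the retry-after delay when there is nothing else to fail over to.

```python
def time_to_sleep_before_retry(retry_after: float, num_deployments: int, healthy_deployments: int) -> float:
    if num_deployments == 1:        # base case: only one deployment, so we must wait for it
        return max(retry_after, 0.0)
    if healthy_deployments > 0:     # another deployment can serve the retry immediately
        return 0.0
    return max(retry_after, 0.0)
```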
* test(test_router.py): fix test
* feat(bedrock/): add support for 'nova' models
also adds explicit 'converse/' route for simpler routing
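Hedged example of the explicit converse/ route; the Nova model id is illustrative.

```python
import litellm

# Force the Converse API path explicitly instead of relying on auto-routing.
response = litellm.completion(
    model="bedrock/converse/amazon.nova-lite-v1:0",
    messages=[{"role": "user", "content": "Hello"}],
)
```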
* fix: fix 'supports_pdf_input'
return if model supports pdf input on get_model_info
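Hedged check of the `supports_pdf_input` field now returned by get_model_info; the model name is illustrative.

```python
import litellm

info = litellm.get_model_info(model="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0")
print(info.get("supports_pdf_input"))
```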
* feat(converse_transformation.py): support bedrock pdf input
* docs(document_understanding.md): add document understanding to docs
* fix(litellm_pre_call_utils.py): fix linting error
* fix(init.py): fix passing of bedrock converse models
* feat(bedrock/converse): support 'response_format={"type": "json_object"}'
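Hedged example of the new Converse JSON-mode support; the model name is a placeholder.

```python
import litellm

response = litellm.completion(
    model="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": "Return a JSON object with a 'city' key."}],
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)
```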
* fix(converse_handler.py): fix linting error
* fix(base_llm_unit_tests.py): fix test
* fix: fix test
* test: fix test
* test: fix test
* test: remove duplicate test
---------
Co-authored-by: h4n0 <4738254+h4n0@users.noreply.github.com>
* feat(aws_base_llm.py): prevents recreating boto3 credentials during high traffic
Leads to 100ms perf boost in local testing
* fix(base_aws_llm.py): fix credential caching check to see if token is set
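An illustrative sketch of the credential-caching idea from the two commits above, with the helper name and TTL assumed (the follow-up fix also checks whether a session token is set before trusting a cached entry, which this sketch omits):

```python
import time
from typing import Optional

import boto3

_credential_cache: dict = {}
_CACHE_TTL_SECONDS = 3600


def get_cached_credentials(region_name: str, profile_name: Optional[str] = None):
    # Reuse previously resolved credentials instead of rebuilding a boto3 session
    # on every request during high traffic.
    cache_key = (region_name, profile_name)
    cached = _credential_cache.get(cache_key)
    if cached and cached["expires_at"] > time.time():
        return cached["credentials"]

    session = boto3.Session(region_name=region_name, profile_name=profile_name)
    credentials = session.get_credentials()
    _credential_cache[cache_key] = {
        "credentials": credentials,
        "expires_at": time.time() + _CACHE_TTL_SECONDS,
    }
    return credentials
```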
* refactor(bedrock/chat): separate converse api and invoke api + isolate converse api transformation logic
Make it easier to see how requests are transformed for /converse
* fix: fix imports
* fix(bedrock/embed): fix reordering of headers
* fix(base_aws_llm.py): fix get credential logic
* fix(converse_handler.py): fix ai21 streaming response
Renamed from litellm/llms/bedrock/chat.py