* build(pyproject.toml): add new dev dependencies for type checking
* build: reformat files to conform to black
* ci: reformat ci files to conform to black
* ci(test-litellm.yml): make test runs clearer
* build(pyproject.toml): add ruff
* fix: fix ruff checks
* build(mypy/): fix mypy linting errors
* fix(hashicorp_secret_manager.py): fix passing the cert for TLS auth
* build(mypy/): resolve all mypy errors
* test: update test
* fix: fix black formatting
* build(pre-commit-config.yaml): use poetry run black
* fix(proxy_server.py): fix linting error
* fix: fix ruff safe representation error
* fix: initial commit for adding provider model discovery to gemini
* feat(gemini/): add model discovery for gemini/ route
* docs(set_keys.md): update docs to show you can check available gemini models as well
* feat(anthropic/): add model discovery for anthropic api key
* feat(xai/): add model discovery for xAI
enables checking which models an xAI key can call
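A minimal sketch of the model discovery entries above; `check_provider_endpoint` is assumed to be the flag that queries the provider's live `/models` endpoint:

```python
import os

import litellm

# works the same way for GEMINI_API_KEY, ANTHROPIC_API_KEY, or XAI_API_KEY
os.environ["GEMINI_API_KEY"] = "..."

# returns the models this key can actually call
models = litellm.get_valid_models(check_provider_endpoint=True)
print(models)  # e.g. ["gemini/gemini-1.5-pro", ...]
```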
* ci: bump ci config yml
* fix(topaz/common_utils.py): fix linting error
* fix: fix linting error for python3.8
* refactor: introduce new transformation config for gpt-4o-transcribe models
* refactor: expose new transformation configs for audio transcription
* ci: fix config yml
* feat(openai/transcriptions): support provider config transformation on openai audio transcriptions
allows gpt-4o and whisper audio transcription to work as expected
* refactor: migrate fireworks ai + deepgram to new transform request pattern
* feat(openai/): working support for gpt-4o-audio-transcribe
* build(model_prices_and_context_window.json): add gpt-4o-transcribe to model cost map
* build(model_prices_and_context_window.json): specify what endpoints are supported for `/audio/transcriptions`
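A minimal sketch of calling the new transcription support; the `openai/gpt-4o-transcribe` model name follows the cost-map entries above:

```python
import litellm

with open("speech.mp3", "rb") as audio_file:
    # litellm routes this through the new provider config transformation
    transcript = litellm.transcription(
        model="openai/gpt-4o-transcribe",
        file=audio_file,
    )
print(transcript.text)
```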
* fix(get_supported_openai_params.py): fix return value
* refactor(deepgram/): migrate unit test to deepgram handler
* refactor: cleanup unused imports
* fix(get_supported_openai_params.py): fix linting error
* test: update test
* fix(anthropic_claude3_transformation.py): fix amazon anthropic claude 3 tool calling transformation on invoke route
move to using anthropic config as base
* fix(utils.py): expose anthropic config via ProviderConfigManager
* fix(llm_http_handler.py): support json mode on async completion calls
* fix(invoke_handler/make_call): support json mode for anthropic called via bedrock invoke
* fix(anthropic/): handle `response_format: {"type": "text"}` + migrate amazon claude 3 invoke config to inherit from anthropic config
Prevents error when passing in `response_format: {"type": "text"}`
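A minimal sketch of the `response_format` handling above on the bedrock invoke route; the model id is illustrative:

```python
import litellm

# json mode now works on sync + async invoke calls
resp = litellm.completion(
    model="bedrock/invoke/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": "Return a JSON object with a 'city' key."}],
    response_format={"type": "json_object"},
)

# response_format={"type": "text"} no longer raises
resp = litellm.completion(
    model="bedrock/invoke/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": "Hi"}],
    response_format={"type": "text"},
)
```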
* test: fix test
* fix(utils.py): fix base invoke provider check
* fix(anthropic_claude3_transformation.py): don't pass 'stream' param
* fix: fix linting errors
* fix(converse_transformation.py): handle response_format type=text for converse
* Adding VertexAI Claude 3.7 Sonnet (#8774)
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
* build(model_prices_and_context_window.json): add anthropic 3-7 models on vertex ai and bedrock
* Support video_url (#8743)
* Support video_url
Support VLMs that work with video.
Example implementation in vllm: https://github.com/vllm-project/vllm/pull/10020
* llms openai.py: Add ChatCompletionVideoObject
Add data structures to support `video_url` in chat completion
* test test_completion.py: add test for video_url
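A minimal sketch of the new `video_url` content block; the structure mirrors `image_url`, and the hosted vLLM model name is illustrative:

```python
import litellm

resp = litellm.completion(
    model="hosted_vllm/llava-hf/LLaVA-NeXT-Video-7B-hf",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize this clip."},
                # new ChatCompletionVideoObject shape
                {"type": "video_url", "video_url": {"url": "https://example.com/clip.mp4"}},
            ],
        }
    ],
)
print(resp.choices[0].message.content)
```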
* Arize Phoenix - ensure correct endpoint/protocol are used; and default to phoenix cloud (#8750)
* minor fixes to default to http and to ensure that the correct endpoint is used
* Update test_arize_phoenix.py
* prioritize http over grpc
---------
Co-authored-by: Emerson Gomes <emerson.gomes@gmail.com>
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
Co-authored-by: Pang Wu <104795337+pang-wu@users.noreply.github.com>
Co-authored-by: Nate Mar <67926244+nate-mar@users.noreply.github.com>
* feat(bedrock/rerank): infer model region if model given as arn
* test: add unit testing to ensure bedrock region name inferred from arn on rerank
* feat(bedrock/rerank/transformation.py): include search units for bedrock rerank result
Resolves https://github.com/BerriAI/litellm/issues/7258#issuecomment-2671557137
* test(test_bedrock_completion.py): add testing for bedrock cohere rerank
* feat(cost_calculator.py): refactor rerank cost tracking to support bedrock cost tracking
* build(model_prices_and_context_window.json): add amazon.rerank model to model cost map
* fix(cost_calculator.py, bedrock/common_utils.py): get base model from a model given as an ARN
handles the rerank model case
* build(model_prices_and_context_window.json): add bedrock cohere rerank pricing
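A minimal sketch of rerank with a bedrock model passed as an ARN, per the region-inference and cost-tracking entries above; the ARN is illustrative:

```python
import litellm

results = litellm.rerank(
    # region (us-west-2) is inferred from the ARN
    model="bedrock/arn:aws:bedrock:us-west-2::foundation-model/amazon.rerank-v1:0",
    query="What is the capital of France?",
    documents=["Paris is the capital of France.", "London is in England."],
    top_n=1,
)
print(results)
```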
* feat(bedrock/rerank): migrate bedrock config to basererank config
* Revert "feat(bedrock/rerank): migrate bedrock config to basererank config"
This reverts commit 84fae1f167.
* test: add testing to ensure large doc / queries are correctly counted
* Revert "test: add testing to ensure large doc / queries are correctly counted"
This reverts commit 4337f1657e.
* fix(jina_ai/): migrate jina ai to rerank config, enabling cost tracking
* refactor(jina_ai/): finish migrating jina ai to base rerank config
enables cost tracking
* fix(jina_ai/rerank): e2e jina ai rerank cost tracking
* fix: cleanup dead code
* fix: fix python3.8 compatibility error
* test: fix test
* test: add e2e testing for azure ai rerank
* fix: fix linting error
* test: mark cohere as flaky
* fix(azure/chat/gpt_transformation.py): add 'prediction' as a supported azure param
Closes https://github.com/BerriAI/litellm/issues/8500
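A minimal sketch of the newly supported `prediction` param on the azure route (OpenAI predicted outputs); the deployment name is illustrative:

```python
import litellm

code = "def greet():\n    return 'hello'"
resp = litellm.completion(
    model="azure/my-gpt-4o-deployment",
    messages=[{"role": "user", "content": f"Rename greet to hello:\n{code}"}],
    # passed through to azure as-is
    prediction={"type": "content", "content": code},
)
```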
* build(model_prices_and_context_window.json): add new 'gemini-2.0-pro-exp-02-05' model
* style: cleanup invalid json trailing comma
* feat(utils.py): support passing 'tokenizer_config' to register_prompt_template
enables passing complete tokenizer config of model to litellm
Allows calling deepseek on bedrock with the correct prompt template
* fix(utils.py): fix register_prompt_template for custom model names
* test(test_prompt_factory.py): fix test
* test(test_completion.py): add e2e test for bedrock invoke deepseek ft model
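A minimal sketch of `register_prompt_template` with the new `tokenizer_config` param, assuming it accepts a Hugging Face-style tokenizer config; the model ARN and chat template are illustrative and heavily trimmed:

```python
import litellm

litellm.register_prompt_template(
    model="bedrock/arn:aws:bedrock:us-east-1:123456789012:imported-model/my-deepseek-ft",
    tokenizer_config={
        "bos_token": "<s>",
        "eos_token": "</s>",
        "chat_template": (
            "{% for message in messages %}"
            "{{ message['role'] }}: {{ message['content'] }}\n"
            "{% endfor %}"
        ),
    },
)
```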
* feat(base_invoke_transformation.py): support hf_model_name param for bedrock invoke calls
enables the proxy admin to set the base model for a fine-tuned bedrock deepseek model
* feat(bedrock/invoke): support deepseek_r1 route for bedrock
makes it easy to apply the right chat template to that call
* feat(constants.py): store deepseek r1 chat template - allow user to get correct response from deepseek r1 without extra work
* test(test_completion.py): add e2e mock test for bedrock deepseek
* docs(bedrock.md): document new deepseek_r1 route for bedrock
allows us to use the right config
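A minimal sketch of the new `deepseek_r1` bedrock route, which applies the stored deepseek r1 chat template automatically; the imported-model ARN is illustrative:

```python
import litellm

resp = litellm.completion(
    model="bedrock/deepseek_r1/arn:aws:bedrock:us-east-1:123456789012:imported-model/abc123",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(resp.choices[0].message.content)
```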
* fix(exception_mapping_utils.py): catch read operation timeout
* initial transform for invoke
* invoke transform_response
* working - able to make request
* working get_complete_url
* working - invoke now runs on llm_http_handler
* fix unused imports
* track litellm overhead ms
* working stream request
* sign_request transform
* sign_request update
* use has_async_custom_stream_wrapper property
* use get_async_custom_stream_wrapper in base llm http handler
* fix make_call in invoke handler
* fix invoke with streaming get_async_custom_stream_wrapper
* working bedrock async streaming with invoke
* fix make call handler for bedrock
* test_all_model_configs
* fix test_bedrock_custom_prompt_template
* sync streaming for bedrock invoke
* fix _add_stream_param_to_request_body
* test_async_text_completion_bedrock
* fix transform_request
* fix get_supported_openai_params
* fix test supports tool choice
* fix test_supports_tool_choice
* add unit test coverage for bedrock invoke transform
* fix location of transformation files
* update import loc
* fix bedrock invoke unit tests
* fix import for max completion tokens