* fix(main.py): pass default azure api version as a fallback in completion call
Fixes API error caused by the API version
Closes https://github.com/BerriAI/litellm/issues/5584
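A minimal sketch of the call path this fix affects: explicitly passing an Azure API version on a completion call (deployment name, endpoint, and key below are placeholders; if `api_version` is omitted, litellm falls back to its default version).

```python
import litellm

# Explicitly pin the Azure API version; litellm uses a default version if this is omitted.
response = litellm.completion(
    model="azure/my-gpt-4o-deployment",           # placeholder deployment name
    messages=[{"role": "user", "content": "Hello"}],
    api_base="https://my-endpoint.openai.azure.com",
    api_key="<AZURE_API_KEY>",
    api_version="2024-08-01-preview",
)
print(response.choices[0].message.content)
```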
* Fixed gemini-1.5-flash pricing (#5590)
* add /key/list endpoint
* bump: version 1.44.21 → 1.44.22
* docs architecture
* Fixed gemini-1.5-flash pricing
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* fix(bedrock/chat.py): fix converse api stop sequence param mapping
Fixes https://github.com/BerriAI/litellm/issues/5592
* fix(databricks/cost_calculator.py): handle databricks model name changes
Fixes https://github.com/BerriAI/litellm/issues/5597
* fix(azure.py): support azure api version 2024-08-01-preview
Closes https://github.com/BerriAI/litellm/issues/5377
* fix(proxy/_types.py): allow dev keys to call cohere /rerank endpoint
Fixes issue where only the admin could call the rerank endpoint
* fix(azure.py): check if model is gpt-4o
* fix(proxy/_types.py): support /v1/rerank on non-admin routes as well
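A hedged sketch of what this enables: calling the proxy's `/v1/rerank` route with a non-admin virtual key. The URL, key, and model name are placeholders; the request body follows Cohere's rerank schema.

```python
import requests

# Non-admin virtual key calling the proxy rerank route (previously admin-only).
resp = requests.post(
    "http://localhost:4000/v1/rerank",
    headers={"Authorization": "Bearer sk-dev-key-1234"},   # placeholder dev key
    json={
        "model": "cohere/rerank-english-v3.0",
        "query": "What is the capital of France?",
        "documents": ["Paris is the capital of France.", "Berlin is in Germany."],
        "top_n": 1,
    },
)
print(resp.json())
```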
* fix(cost_calculator.py): fix split on `/` logic in cost calculator
---------
Co-authored-by: F1bos <44951186+F1bos@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* fix(vertex_ai): fix issue where a multimodal message without text caused Vertex AI calls to fail
Fixes https://github.com/BerriAI/litellm/issues/5515
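Sketch of the previously-failing case: a multimodal message containing only an image part and no text part (model name and image URI are placeholders).

```python
import litellm

# A user message with an image part but no accompanying text part.
response = litellm.completion(
    model="vertex_ai/gemini-1.5-pro",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "gs://my-bucket/cat.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```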
* fix(azure.py): move to using httphandler for oidc token calls
Fixes issue where SSL certificates weren't being picked up as expected
Closes https://github.com/BerriAI/litellm/issues/5522
* feat: allow admin to set a default_max_internal_user_budget in config, and allow more specific values to be set as env vars
* fix(proxy_server.py): fix read for max_internal_user_budget
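A hypothetical sketch of the intended precedence only; the env var and variable names below are illustrative, not necessarily litellm's actual settings keys: an env var, if present, overrides the config-level default budget.

```python
import os

# Config-level default an admin might set (illustrative value).
default_max_internal_user_budget = 10.0

# Hypothetical env var name; a more specific value set in the environment wins.
max_internal_user_budget = float(
    os.getenv("MAX_INTERNAL_USER_BUDGET", default_max_internal_user_budget)
)
print(f"internal users default to a ${max_internal_user_budget} budget")
```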
* build(model_prices_and_context_window.json): add regional gpt-4o-2024-08-06 pricing
Closes https://github.com/BerriAI/litellm/issues/5540
* test: skip re-test
* feat(router.py): initial commit for loadbalancing azure batch api endpoints
Closes https://github.com/BerriAI/litellm/issues/5396
* fix(router.py): working `router.acreate_file()`
* feat(router.py): working router.acreate_batch endpoint
* feat(router.py): expose router.aretrieve_batch function
Makes it easy for the user to retrieve batch information
* feat(router.py): support 'router.alist_batches' endpoint
Adds support for getting all batches across all endpoints
* feat(router.py): working loadbalancing on `/v1/files`
* feat(proxy_server.py): working loadbalancing on `/v1/batches`
* feat(proxy_server.py): working loadbalancing on Retrieve + List batch
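A hedged sketch of the load-balanced batch flow across Azure deployments using the router methods named above. The `model_list` entry is a placeholder, and the exact parameter names are assumed to mirror the OpenAI files/batches API rather than taken from litellm's docs.

```python
import asyncio
from litellm import Router

# One Azure deployment shown; in practice multiple entries share the same model_name
# so the router can load balance file/batch calls across them.
router = Router(
    model_list=[
        {
            "model_name": "azure-gpt-4o",
            "litellm_params": {
                "model": "azure/gpt-4o",
                "api_base": "https://endpoint-1.openai.azure.com",
                "api_key": "<key-1>",
            },
        },
    ]
)

async def run_batch():
    # Upload the JSONL input file to whichever deployment the router picks.
    file_obj = await router.acreate_file(
        model="azure-gpt-4o", file=open("batch_input.jsonl", "rb"), purpose="batch"
    )
    # Create the batch against the same logical model group.
    batch = await router.acreate_batch(
        model="azure-gpt-4o",
        input_file_id=file_obj.id,
        endpoint="/v1/chat/completions",
        completion_window="24h",
    )
    retrieved = await router.aretrieve_batch(model="azure-gpt-4o", batch_id=batch.id)
    all_batches = await router.alist_batches(model="azure-gpt-4o")
    return retrieved, all_batches

asyncio.run(run_batch())
```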
* feat(proxy/_types.py): add lago billing to callbacks ui
Closes https://github.com/BerriAI/litellm/issues/5472
* fix(anthropic.py): return anthropic prompt caching information
Fixes https://github.com/BerriAI/litellm/issues/5364
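Hedged sketch of what "return prompt caching information" looks like from the caller's side: mark a large, reusable system prompt as cacheable and read the cache fields off the usage object. Field names follow Anthropic's API; the model and prompt are placeholders.

```python
import litellm

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=[
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    "text": "<very long, reusable system prompt>",
                    "cache_control": {"type": "ephemeral"},  # ask Anthropic to cache this block
                }
            ],
        },
        {"role": "user", "content": "Summarize the rules above."},
    ],
)

usage = response.usage
# Prompt-caching fields surfaced by this fix (names per Anthropic's API).
print(getattr(usage, "cache_creation_input_tokens", None))
print(getattr(usage, "cache_read_input_tokens", None))
```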
* feat(bedrock/chat.py): support 'json_schema' for bedrock models
Closes https://github.com/BerriAI/litellm/issues/5434
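Sketch of requesting structured output from a Bedrock model via an OpenAI-style `response_format` with a JSON schema; the model id and schema are placeholders.

```python
import litellm

response = litellm.completion(
    model="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": "Give me a user named Alice, age 30."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "user",
            "schema": {
                "type": "object",
                "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
                "required": ["name", "age"],
            },
        },
    },
)
print(response.choices[0].message.content)  # JSON matching the schema
```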
* fix(bedrock/embed/embeddings.py): support async embeddings for amazon titan models
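Minimal sketch of the async embedding path for an Amazon Titan model (model id is a placeholder for whichever Titan embedding model you have access to).

```python
import asyncio
import litellm

async def embed():
    return await litellm.aembedding(
        model="bedrock/amazon.titan-embed-text-v1",
        input=["hello world", "load balancing is fun"],
    )

response = asyncio.run(embed())
print(len(response.data))  # one embedding per input string
```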
* fix: linting fixes
* fix: handle key errors
* fix(bedrock/chat.py): fix bedrock ai21 streaming object
* feat(bedrock/embed): support bedrock embedding optional params
* fix(databricks.py): fix usage chunk
* fix(internal_user_endpoints.py): apply internal user defaults, if user role updated
Fixes issue where user update wouldn't apply defaults
* feat(slack_alerting.py): provide multiple slack channels for a given alert type
Multiple channels might be interested in receiving alerts of a given type
* docs(alerting.md): add multiple channel alerting to docs
* Azure Service Principal with Secret authentication workflow. (#5131)
* Implement Azure Service Principal with Secret authentication workflow.
* Use `ClientSecretCredential` instead of `DefaultAzureCredential`.
* Move imports into the function.
* Add type hint for `azure_ad_token_provider`.
* Add unit test for router initialization and sample completion using Azure Service Principal with Secret authentication workflow.
* Add unit test for router initialization with neither an API key nor the Azure Service Principal with Secret authentication workflow.
* fix(client_initialization_utils.py): fix typing + overrides
* test: fix linting errors
* fix(client_initialization_utils.py): fix client init azure ad token logic
* fix(router_client_initialization.py): add flag check for reading azure ad token from environment
* test(test_streaming.py): skip end of life bedrock model
* test(test_router_client_init.py): add correct flag to test
---------
Co-authored-by: kzych-inpost <142029278+kzych-inpost@users.noreply.github.com>
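A sketch of the Service Principal ("client secret") flow introduced above: exchange tenant id, client id, and client secret for an Entra ID token and pass it to the completion call as `azure_ad_token`. The router workflow builds an equivalent token provider internally from environment variables; the ids, deployment, and endpoint below are placeholders.

```python
from azure.identity import ClientSecretCredential, get_bearer_token_provider
import litellm

# Service principal credentials (placeholders).
credential = ClientSecretCredential(
    tenant_id="<AZURE_TENANT_ID>",
    client_id="<AZURE_CLIENT_ID>",
    client_secret="<AZURE_CLIENT_SECRET>",
)
# Callable that mints bearer tokens scoped to Azure Cognitive Services.
token_provider = get_bearer_token_provider(
    credential, "https://cognitiveservices.azure.com/.default"
)

response = litellm.completion(
    model="azure/my-gpt-4o-deployment",
    messages=[{"role": "user", "content": "Hello"}],
    api_base="https://my-endpoint.openai.azure.com",
    api_version="2024-08-01-preview",
    azure_ad_token=token_provider(),  # token obtained via the service principal
)
```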
* refactor(bedrock): initial commit to refactor bedrock to a folder
Improve code readability + maintainability
* refactor: more refactor work
* fix: fix imports
* feat(bedrock/embeddings.py): support translating embedding requests into Amazon embedding formats
* fix: fix linting errors
* test: skip test on end of life model
* fix(cohere/embed.py): fix linting error
* fix(cohere/embed.py): fix typing
* fix(cohere/embed.py): fix post-call logging for cohere embedding call
* test(test_embeddings.py): fix error message assertion in test
* fix(utils.py): support 'drop_params' for embedding requests
Fixes https://github.com/BerriAI/litellm/issues/5444
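Sketch of the new behavior: `drop_params=True` tells litellm to silently drop OpenAI params the target embedding provider doesn't support instead of raising (model and params below are placeholders).

```python
import litellm

response = litellm.embedding(
    model="bedrock/amazon.titan-embed-text-v1",
    input=["hello world"],
    encoding_format="float",  # dropped if the provider doesn't accept it
    drop_params=True,
)
print(len(response.data))
```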
* feat(anthropic/cost_calculation.py): Support calculating cost for prompt caching on anthropic
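Illustrative arithmetic only, not litellm's implementation: the shape of a prompt-caching cost calculation, with placeholder rates. Real per-model rates live in `model_prices_and_context_window.json`, and Anthropic bills cache writes at a premium over the base input rate and cache reads at a steep discount.

```python
# Placeholder base rate; the multipliers reflect Anthropic's documented caching pricing structure.
input_cost_per_token = 3e-06
cache_creation_cost = input_cost_per_token * 1.25   # cache write premium
cache_read_cost = input_cost_per_token * 0.10       # cache read discount

usage = {
    "prompt_tokens": 20,                  # non-cached input tokens
    "cache_creation_input_tokens": 2000,  # tokens written to the cache
    "cache_read_input_tokens": 2000,      # tokens served from the cache
}

prompt_cost = (
    usage["prompt_tokens"] * input_cost_per_token
    + usage["cache_creation_input_tokens"] * cache_creation_cost
    + usage["cache_read_input_tokens"] * cache_read_cost
)
print(f"${prompt_cost:.6f}")
```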
* feat(types/utils.py): allows us to migrate to openai's equivalent, once that comes out
* fix: fix linting errors
* test: mark flaky test