* fix(main.py): pass default azure api version as alternative in completion call
Fixes api error caused by the api version
Closes https://github.com/BerriAI/litellm/issues/5584
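A minimal sketch of the fallback behavior this fix describes; `AZURE_DEFAULT_API_VERSION` and the helper name are illustrative, not litellm's actual internals:
```python
from typing import Optional

AZURE_DEFAULT_API_VERSION = "2024-02-01"  # illustrative default

def resolve_azure_api_version(api_version: Optional[str]) -> str:
    # Prefer the caller-supplied version; fall back to the default
    # instead of failing the completion call outright.
    return api_version or AZURE_DEFAULT_API_VERSION

print(resolve_azure_api_version(None))                  # 2024-02-01
print(resolve_azure_api_version("2024-08-01-preview"))  # caller's value wins
```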
* Fixed gemini-1.5-flash pricing (#5590)
* add /key/list endpoint
* bump: version 1.44.21 → 1.44.22
* docs: architecture
* Fixed gemini-1.5-flash pricing
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* fix(bedrock/chat.py): fix converse api stop sequence param mapping
Fixes https://github.com/BerriAI/litellm/issues/5592
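A hedged sketch of the param mapping this fix implies, assuming the Converse API's `inferenceConfig.stopSequences` field; the helper name is mine:
```python
from typing import List, Union

def map_stop_sequences(stop: Union[str, List[str], None]) -> dict:
    inference_config: dict = {}
    if stop is not None:
        # Converse expects a list of strings, so coerce a bare string.
        inference_config["stopSequences"] = [stop] if isinstance(stop, str) else stop
    return inference_config

print(map_stop_sequences("\n\n"))           # {'stopSequences': ['\n\n']}
print(map_stop_sequences(["END", "STOP"]))  # {'stopSequences': ['END', 'STOP']}
```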
* fix(databricks/cost_calculator.py): handle databricks model name changes
Fixes https://github.com/BerriAI/litellm/issues/5597
* fix(azure.py): support azure api version 2024-08-01-preview
Closes https://github.com/BerriAI/litellm/issues/5377
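Illustrative usage once the version is supported; the deployment name is a placeholder, and `AZURE_API_KEY`/`AZURE_API_BASE` are assumed to be set in the environment:
```python
import litellm

response = litellm.completion(
    model="azure/my-gpt-4o-deployment",  # placeholder deployment name
    api_version="2024-08-01-preview",    # the version this fix adds support for
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```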
* fix(proxy/_types.py): allow dev keys to call cohere /rerank endpoint
Fixes issue where only admin could call rerank endpoint
* fix(azure.py): check if model is gpt-4o
* fix(proxy/_types.py): support /v1/rerank on non-admin routes as well
* fix(cost_calculator.py): fix split on `/` logic in cost calculator
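A minimal sketch of the likely shape of the fix, splitting only on the first `/` so names containing further slashes survive intact; the helper name is hypothetical:
```python
def strip_provider_prefix(model: str) -> str:
    # Split on the first "/" only, so a model name that itself
    # contains slashes after the provider prefix is not mangled.
    return model.split("/", 1)[1] if "/" in model else model

print(strip_provider_prefix("azure/gpt-4o"))  # gpt-4o
print(strip_provider_prefix("fireworks_ai/accounts/fireworks/models/llama-v3"))
```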
---------
Co-authored-by: F1bos <44951186+F1bos@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* feat: initial commit
* fix(proxy/auth/auth_utils.py): Allow client-side credentials to be given to the proxy (accepted only when complete credentials are provided)
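A sketch of the all-or-nothing acceptance rule, with illustrative field names rather than litellm's exact schema:
```python
from typing import Optional

def accept_client_credentials(api_key: Optional[str], api_base: Optional[str]) -> bool:
    # All-or-nothing: forward client-side credentials only when the
    # complete set is present, never a partial mix with server config.
    fields = [api_key, api_base]
    supplied = [f for f in fields if f is not None]
    return len(supplied) == len(fields)

print(accept_client_credentials("sk-...", "https://example.com/v1"))  # True
print(accept_client_credentials("sk-...", None))                      # False
```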
* fix(utils.py): return citations for perplexity streaming
Fixes https://github.com/BerriAI/litellm/issues/5535
* fix(anthropic/chat.py): support fallbacks for anthropic streaming (#5542)
* fix(anthropic/chat.py): support fallbacks for anthropic streaming
Fixes https://github.com/BerriAI/litellm/issues/5512
* fix(anthropic/chat.py): use module-level http client if none given (prevents early client closure)
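A minimal sketch of the fallback to a shared module-level client, assuming `httpx` as the underlying HTTP library:
```python
from typing import Optional

import httpx

# Shared client that lives for the process lifetime, so passing no client
# no longer means a short-lived one that may be closed mid-stream.
_module_level_client = httpx.AsyncClient()

def get_async_client(client: Optional[httpx.AsyncClient] = None) -> httpx.AsyncClient:
    # A caller-owned client is used as-is; otherwise reuse the shared one.
    return client if client is not None else _module_level_client
```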
* fix: fix linting errors
* fix(http_handler.py): fix raise_for_status error handling
* test: retry flaky test
* fix otel type
* fix(bedrock/embed): fix error raising
* test(test_openai_batches_and_files.py): skip azure batches test (for now) - quota exceeded
* fix(test_router.py): skip azure batch route test (for now) - hit batch quota limits
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* All `model_group_alias` should show up in `/models`, `/model/info`, `/model_group/info` (#5539)
* fix(router.py): support returning model_alias model names in `/v1/models`
* fix(proxy_server.py): support returning model aliases on `/model/info`
* feat(router.py): support returning model group alias for `/model_group/info`
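An illustrative `Router` setup for the alias behavior described above, assuming the `model_group_alias` parameter; model names are placeholders:
```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",
            "litellm_params": {"model": "openai/gpt-4o"},
        }
    ],
    # Alias -> underlying model group; with this change the alias should
    # also surface in /models, /model/info, and /model_group/info.
    model_group_alias={"gpt-4o-alias": "gpt-4o"},
)
```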
* fix(proxy_server.py): fix linting errors
* fix(proxy_server.py): fix linting errors
* build(model_prices_and_context_window.json): add amazon titan text premier pricing information
Closes https://github.com/BerriAI/litellm/issues/5560
* feat(litellm_logging.py): log standard logging response object for pass-through endpoints. Allows bedrock /invoke agent calls to be correctly logged to langfuse + s3
* fix(success_handler.py): fix linting error
* fix(success_handler.py): fix linting errors
* fix(team_endpoints.py): Allows admin to update team member budgets
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* fix(vertex_ai): Fixes issue where a multimodal message without text was failing vertex calls
Fixes https://github.com/BerriAI/litellm/issues/5515
* fix(azure.py): move to using httphandler for oidc token calls
Fixes issue where ssl certificates weren't being picked up as expected
Closes https://github.com/BerriAI/litellm/issues/5522
* feat: Allow admin to set a default_max_internal_user_budget in config, and allow setting more specific values as env vars
* fix(proxy_server.py): fix read for max_internal_user_budget
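A sketch of the precedence this feature suggests (an env var override beats the config default); the env var name here is hypothetical:
```python
import os
from typing import Optional

def resolve_internal_user_budget(config_default: Optional[float]) -> Optional[float]:
    # Hypothetical env var name; a more specific env value overrides
    # the default_max_internal_user_budget set in the config.
    env_override = os.getenv("LITELLM_MAX_INTERNAL_USER_BUDGET")
    if env_override is not None:
        return float(env_override)
    return config_default

print(resolve_internal_user_budget(100.0))  # 100.0 unless the env var is set
```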
* build(model_prices_and_context_window.json): add regional gpt-4o-2024-08-06 pricing
Closes https://github.com/BerriAI/litellm/issues/5540
* test: skip re-test
* Minor IAM AWS OIDC Improvements (#5246)
* AWS IAM: Temporary tokens are valid across all regions after being issued, so it is wasteful to request one for each region.
* AWS IAM: Include an inline policy, to help reduce misuse of overly permissive IAM roles.
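A hedged sketch combining both improvements above: one `assume_role_with_web_identity` call, with an inline session policy scoping the role down, whose credentials are then reused for clients in every region; the ARN and token are placeholders:
```python
import json

import boto3

# Inline session policy: effective permissions = role policy ∩ this policy.
session_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "*",
        }
    ],
}

sts = boto3.client("sts")
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/example-role",  # placeholder
    RoleSessionName="litellm-session",
    WebIdentityToken="<oidc-token>",  # placeholder
    Policy=json.dumps(session_policy),
)["Credentials"]

def bedrock_client(region: str):
    # STS tokens are not region-scoped, so one set of credentials serves
    # clients in every region; no per-region STS round trip is needed.
    return boto3.client(
        "bedrock-runtime",
        region_name=region,
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

us_client = bedrock_client("us-east-1")
eu_client = bedrock_client("eu-west-1")
```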
* (test_bedrock_completion.py): Ensure we are testing cross AWS region OIDC flow.
* fix(router.py): log rejected requests
Fixes https://github.com/BerriAI/litellm/issues/5498
* refactor: don't use verbose_logger.exception if the exception is raised
The user might already have handling for it, but alerting systems in prod will flag the logged exception as an unhandled error.
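A sketch of the pattern this refactor points at, assuming standard `logging`: log quietly and re-raise, rather than emitting `logger.exception` for an error the caller may handle:
```python
import logging

verbose_logger = logging.getLogger("litellm")

def call_and_reraise(fn):
    try:
        return fn()
    except Exception as e:
        # Previously: verbose_logger.exception(...) here would make prod
        # alerting flag an error the caller may be handling deliberately.
        verbose_logger.debug("error in call, re-raising: %s", e)
        raise  # let the caller decide how to handle it
```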
* fix(datadog.py): support setting datadog source as an env var
Fixes https://github.com/BerriAI/litellm/issues/5508
* docs(logging.md): add dd_source to datadog docs
* fix(proxy_server.py): expose `/customer/list` endpoint for showing all customers
* (bedrock): Fix usage with Cloudflare AI Gateway, and proxies in general. (#5509)
* feat(anthropic.py): support 'cache_control' param for content when it is a string
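A hedged sketch of what this transformation implies for Anthropic's content-block format; the helper name is mine, while `{"type": "ephemeral"}` is Anthropic's documented cache_control value:
```python
def to_anthropic_content(content, cache_control=None):
    # A plain string becomes a single text block; the cache_control
    # marker rides along on that block, per Anthropic's format.
    if isinstance(content, str):
        block = {"type": "text", "text": content}
        if cache_control is not None:
            block["cache_control"] = cache_control
        return [block]
    return content

print(to_anthropic_content("long system prompt ...", {"type": "ephemeral"}))
# [{'type': 'text', 'text': 'long system prompt ...',
#   'cache_control': {'type': 'ephemeral'}}]
```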
* Revert "(bedrock): Fix usage with Cloudflare AI Gateway, and proxies in gener…" (#5519)
This reverts commit 3fac0349c2.
* refactor: ci/cd run again
---------
Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>