* feat(bedrock/rerank): infer model region if model given as arn
* test: add unit testing to ensure bedrock region name inferred from arn on rerank
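A minimal sketch of the kind of ARN-based region inference described above, assuming the standard Bedrock ARN layout (`arn:aws:bedrock:<region>:<account>:...`); the helper name and regex are illustrative, not litellm's actual implementation.
```python
import re
from typing import Optional

def infer_region_from_bedrock_arn(model: str) -> Optional[str]:
    """Illustrative helper: extract the AWS region from a Bedrock model ARN,
    e.g. arn:aws:bedrock:us-west-2:123456789012:model/cohere.rerank-v3-5:0."""
    match = re.match(r"^arn:aws:bedrock:([a-z0-9-]+):", model)
    return match.group(1) if match else None

# Prefer the ARN's region over the env-var default when the caller passes a full ARN.
assert (
    infer_region_from_bedrock_arn(
        "arn:aws:bedrock:us-west-2:123456789012:model/cohere.rerank-v3-5:0"
    )
    == "us-west-2"
)
```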
* feat(bedrock/rerank/transformation.py): include search units for bedrock rerank result
Resolves https://github.com/BerriAI/litellm/issues/7258#issuecomment-2671557137
* test(test_bedrock_completion.py): add testing for bedrock cohere rerank
* feat(cost_calculator.py): refactor rerank cost tracking to support bedrock cost tracking
* build(model_prices_and_context_window.json): add amazon.rerank model to model cost map
* fix(cost_calculator.py, bedrock/common_utils.py): get base model from model given as ARN
handles rerank models
* build(model_prices_and_context_window.json): add bedrock cohere rerank pricing
* feat(bedrock/rerank): migrate bedrock config to basererank config
* Revert "feat(bedrock/rerank): migrate bedrock config to basererank config"
This reverts commit 84fae1f167.
* test: add testing to ensure large doc / queries are correctly counted
* Revert "test: add testing to ensure large doc / queries are correctly counted"
This reverts commit 4337f1657e.
* fix(migrate-jina-ai-to-rerank-config): enables cost tracking
* refactor(jina_ai/): finish migrating jina ai to base rerank config
enables cost tracking
* fix(jina_ai/rerank): e2e jina ai rerank cost tracking
* fix: cleanup dead code
* fix: fix python3.8 compatibility error
* test: fix test
* test: add e2e testing for azure ai rerank
* fix: fix linting error
* test: mark cohere as flaky
* fix(team_endpoints.py): allow team member to view team info
* test: handle model overloaded in tool calling test
* test: handle internal server error
* fix(team_endpoints.py): cleanup user <-> team association on team delete
Fixes issue where user table still listed team membership post delete
* test(test_team.py): update e2e test - ensure user/team membership is deleted on team delete
* fix(base_invoke_transformation.py): fix deepseek r1 transformation
remove deepseek name from model url
* test(test_completion.py): assert model route not in url
* feat(base_invoke_transformation.py): infer region name from model arn
prevents errors when the region name in the env var differs from the one in the model ARN; still respects a region explicitly set on the call
* test: fix test
* test: skip on internal server error
* fix(azure/chat/gpt_transformation.py): add 'prediction' as a supported azure param
Closes https://github.com/BerriAI/litellm/issues/8500
* build(model_prices_and_context_window.json): add new 'gemini-2.0-pro-exp-02-05' model
* style: cleanup invalid json trailing comma
* feat(utils.py): support passing 'tokenizer_config' to register_prompt_template
enables passing complete tokenizer config of model to litellm
Allows calling deepseek on bedrock with the correct prompt template
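A rough usage sketch of the new `tokenizer_config` path, assuming it accepts a Hugging Face-style tokenizer/chat-template dict; the model name and template below are placeholders, not working values.
```python
import litellm

# Hypothetical example: hand litellm the model's own chat template so it can
# build the correct prompt for a custom (e.g. deepseek-on-bedrock) model.
litellm.register_prompt_template(
    model="bedrock/arn:aws:bedrock:us-east-1:000000000000:imported-model/my-deepseek",  # placeholder
    tokenizer_config={
        "bos_token": "<｜begin▁of▁sentence｜>",
        "eos_token": "<｜end▁of▁sentence｜>",
        "chat_template": "{% for message in messages %}{{ message['content'] }}{% endfor %}",  # truncated
    },
)
```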
* fix(utils.py): fix register_prompt_template for custom model names
* test(test_prompt_factory.py): fix test
* test(test_completion.py): add e2e test for bedrock invoke deepseek ft model
* feat(base_invoke_transformation.py): support hf_model_name param for bedrock invoke calls
enables proxy admin to set base model for ft bedrock deepseek model
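A hedged illustration of the `hf_model_name` idea: a fine-tuned Bedrock import's ARN doesn't reveal its base model, so the parameter points litellm at the right prompt template. The parameter placement and ARN below are assumptions for the sketch.
```python
import litellm

response = litellm.completion(
    model="bedrock/invoke/arn:aws:bedrock:us-east-1:000000000000:imported-model/abc123",  # placeholder ARN
    hf_model_name="deepseek-ai/DeepSeek-R1-Distill-Llama-70B",  # base model whose template should be applied
    messages=[{"role": "user", "content": "Hello"}],
)
```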
* feat(bedrock/invoke): support deepseek_r1 route for bedrock
makes it easy to apply the right chat template to that call
* feat(constants.py): store deepseek r1 chat template - allow user to get correct response from deepseek r1 without extra work
* test(test_completion.py): add e2e mock test for bedrock deepseek
* docs(bedrock.md): document new deepseek_r1 route for bedrock
allows us to use the right config
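A hedged call sketch for the new `deepseek_r1` invoke route, which applies the stored DeepSeek R1 chat template automatically; the ARN is a placeholder.
```python
import litellm

response = litellm.completion(
    model="bedrock/deepseek_r1/arn:aws:bedrock:us-east-1:000000000000:imported-model/abc123",  # placeholder ARN
    messages=[{"role": "user", "content": "Explain mixture-of-experts in one paragraph."}],
)
print(response.choices[0].message.content)
```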
* fix(exception_mapping_utils.py): catch read operation timeout
* fix(utils.py): fix vertex ai optional param handling
don't pass max retries to unsupported route
Fixes https://github.com/BerriAI/litellm/issues/8254
* fix(get_supported_openai_params.py): fix linting error
* fix(get_supported_openai_params.py): default to openai-like spec
* test: fix test
* fix: fix linting error
* Improved wildcard route handling on `/models` and `/model_group/info` (#8473)
* fix(model_checks.py): update returning known model from wildcard to filter based on given model prefix
ensures the wildcard route `vertex_ai/gemini-*` returns only known vertex_ai/gemini- models
* test(test_proxy_utils.py): add unit testing for new 'get_known_models_from_wildcard' helper
* test(test_models.py): add e2e testing for `/model_group/info` endpoint
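A hedged sketch of the kind of wildcard deployment this change affects: with the filtering fix, listing models for `vertex_ai/gemini-*` should surface only known vertex_ai/gemini- models rather than the full model map. The Router usage below is illustrative.
```python
from litellm import Router

router = Router(
    model_list=[
        {
            # Wildcard route - any vertex_ai/gemini-* request matches this deployment.
            "model_name": "vertex_ai/gemini-*",
            "litellm_params": {"model": "vertex_ai/gemini-*"},
        }
    ]
)
```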
* feat(prometheus.py): support tracking total requests by user_email on prometheus
adds initial support for tracking total requests by user_email
* test(test_prometheus.py): add testing to ensure user email is always tracked
* test: update testing for new prometheus metric
* test(test_prometheus_unit_tests.py): add user email to total proxy metric
* test: update tests
* test: fix spend tests
* test: fix test
* fix(pagerduty.py): fix linting error
* (Bug fix) - Using `include_usage` for /completions requests + unit testing (#8484)
* pass stream options (#8419)
* test_completion_streaming_usage_metrics
* test_text_completion_include_usage
---------
Co-authored-by: Kaushik Deka <55996465+Kaushikdkrikhanu@users.noreply.github.com>
* fix naming docker stable release
* build(model_prices_and_context_window.json): handle azure model update
* docs(token_auth.md): clarify scopes can be a list or comma separated string
* docs: fix docs
* add sonar pricings (#8476)
* add sonar pricings
* Update model_prices_and_context_window.json
* Update model_prices_and_context_window.json
* Update model_prices_and_context_window_backup.json
* update load testing script
* fix test_async_router_context_window_fallback
* pplx - fix supports tool choice openai param (#8496)
* fix prom check startup (#8492)
* test_async_router_context_window_fallback
* ci(config.yml): mark daily docker builds with `-nightly` (#8499)
Resolves https://github.com/BerriAI/litellm/discussions/8495
* (Redis Cluster) - Fixes for using redis cluster + pipeline (#8442)
* update RedisCluster creation
* update RedisClusterCache
* add redis ClusterCache
* update async_set_cache_pipeline
* cleanup redis cluster usage
* fix redis pipeline
* test_init_async_client_returns_same_instance
* fix redis cluster
* update mypy_path
* fix init_redis_cluster
* remove stub
* test redis commit
* ClusterPipeline
* fix import
* RedisCluster import
* fix redis cluster
* Potential fix for code scanning alert no. 2129: Clear-text logging of sensitive information
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* fix naming of redis cluster integration
* test_redis_caching_ttl_pipeline
* fix async_set_cache_pipeline
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* Litellm UI stable version 02 12 2025 (#8497)
* fix(key_management_endpoints.py): fix `/key/list` to include `return_full_object` as a top-level query param
Allows user to specify they want the keys as a list of objects
* refactor(key_list.tsx): initial refactor of key table in user dashboard
offloads key filtering logic to backend api
prevents common error of user not being able to see their keys
* fix(key_management_endpoints.py): allow internal user to query `/key/list` to see their keys
* fix(key_management_endpoints.py): add validation checks and filtering to `/key/list` endpoint
allow internal users to see their own keys, not anybody else's
* fix(view_key_table.tsx): fix issue where internal user could not see default team keys
* fix: fix linting error
* fix: fix linting error
* fix: fix linting error
* fix: fix linting error
* fix: fix linting error
* fix: fix linting error
* fix: fix linting error
* test_supports_tool_choice
* test_async_router_context_window_fallback
* fix: fix test (#8501)
* Litellm dev 02 12 2025 p1 (#8494)
* Resolves https://github.com/BerriAI/litellm/issues/6625 (#8459)
- enables no auth for SMTP
Signed-off-by: Regli Daniel <daniel.regli1@sanitas.com>
* add sonar pricings (#8476)
* add sonar pricings
* Update model_prices_and_context_window.json
* Update model_prices_and_context_window.json
* Update model_prices_and_context_window_backup.json
* test: fix test
---------
Signed-off-by: Regli Daniel <daniel.regli1@sanitas.com>
Co-authored-by: Dani Regli <1daniregli@gmail.com>
Co-authored-by: Lucca Zenóbio <luccazen@gmail.com>
* test: fix test
* UI Fixes p2 (#8502)
* refactor(admin.tsx): cleanup add new admin flow
removes buggy flow. Ensures just 1 simple way to add users / update roles.
* fix(user_search_modal.tsx): ensure 'add member' button is always visible
* fix(edit_membership.tsx): ensure 'save changes' button always visible
* fix(internal_user_endpoints.py): ensure user in org can be deleted
Fixes issue where user couldn't be deleted if they were a member of an org
* fix: fix linting error
* add phoenix docs for observability integration (#8522)
* Add files via upload
* Update arize_integration.md
* Update arize_integration.md
* add Phoenix docs
* Added custom_attributes to additional_keys which can be sent to athina (#8518)
* (UI) fix log details page (#8524)
* rollback changes to view logs page
* ui new build
* add interface for prefetch
* fix spread operation
* fix max size for request view page
* clean up table
* ui fix column on request logs page
* ui new build
* Add UI Support for Admins to Call /cache/ping and View Cache Analytics (#8475) (#8519)
* [Bug] UI: Newly created key does not display on the View Key Page (#8039)
- Fixed issue where all keys appeared blank for admin users.
- Implemented filtering of data via team settings to ensure all keys are displayed correctly.
* Fix:
- Updated the validator to allow model editing when `keyTeam.team_alias === "Default Team"`.
- Ensured other teams still follow the original validation rules.
* - added some classes in global.css
- added text wrap in output of request, response and metadata in index.tsx
- fixed styles of table in table.tsx
* - added full payload when we open single log entry
- added Combined Info Card in index.tsx
* fix: keys not showing on refresh for internal user
* merge
* main merge
* cache page
* ca remove
* terms change
* fix: places caching inside exp
---------
Signed-off-by: Regli Daniel <daniel.regli1@sanitas.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Kaushik Deka <55996465+Kaushikdkrikhanu@users.noreply.github.com>
Co-authored-by: Lucca Zenóbio <luccazen@gmail.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Co-authored-by: Dani Regli <1daniregli@gmail.com>
Co-authored-by: exiao <exiao@users.noreply.github.com>
Co-authored-by: vivek-athina <153479827+vivek-athina@users.noreply.github.com>
Co-authored-by: Taha Ali <123803932+tahaali-dev@users.noreply.github.com>
* fix(router.py): add more deployment timeout debug information for timeout errors
helps understand why some calls under high traffic don't respect their model-specific timeouts
* test(test_convert_dict_to_response.py): unit test ensuring empty str is not converted to None
Addresses https://github.com/BerriAI/litellm/issues/8507
* fix(convert_dict_to_response.py): handle empty message str - don't return back as 'None'
Fixes https://github.com/BerriAI/litellm/issues/8507
* test(test_completion.py): add e2e test
* add back streaming for base o3 (#8361)
* test(base_llm_unit_tests.py): add base test for o-series models - ensure streaming always works
* fix(base_llm_unit_tests.py): fix test for o series models
* refactor: move test
---------
Co-authored-by: Matteo Boschini <12133566+mbosc@users.noreply.github.com>
* fix(azure.py): ensure max_retries=0 is respected
Fixes https://github.com/BerriAI/litellm/issues/6129
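A hedged example of the behavior being fixed: with `max_retries=0`, litellm should hand 0 to the underlying Azure OpenAI client instead of letting the SDK fall back to its default of 2 retries. Endpoint and deployment names are placeholders.
```python
import litellm

response = litellm.completion(
    model="azure/my-gpt-4o-deployment",               # placeholder deployment
    api_base="https://my-endpoint.openai.azure.com",  # placeholder endpoint
    api_version="2024-06-01",
    messages=[{"role": "user", "content": "Hello"}],
    max_retries=0,  # should now be respected for sync, async, and streaming calls
)
```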
* fix(test_openai.py): add unit test to ensure openai sdk calls always respect max_retries = 0
* test(test_azure_openai.py): add unit testing for azure_text/ route
* fix(azure.py): fix passing max retries on streaming
* fix(azure.py): fix azure max retries on async completion + streaming
* fix(completion/handler.py): fix azure text async completion + streaming
* test(test_azure_openai.py): ensure azure openai max retries always respected
* test(test_azure_o_series.py): add testing to ensure max retries always respected
* Added gemini providers for 2.0-flash and 2.0-flash lite (#8321)
* Update model_prices_and_context_window.json
added gemini providers for 2.0-flash and 2.0-flash lite
* Update model_prices_and_context_window.json
fixed URL
---------
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
* Convert tool use arguments to string before counting tokens (#6989)
In at least some cases, `messages["tool_calls"]["function"]["arguments"]` is a dict rather than a string, and it must be a string to be tokenized properly. If it is already a string, the conversion is a no-op, which is also fine.
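A minimal sketch of that conversion, applying `json.dumps` to dict-typed arguments before token counting; the helper name is illustrative.
```python
import json
import litellm

def stringify_tool_call_arguments(messages: list) -> list:
    """Illustrative: ensure tool-call arguments are strings before counting tokens."""
    for message in messages:
        for tool_call in message.get("tool_calls") or []:
            arguments = tool_call["function"]["arguments"]
            if isinstance(arguments, dict):  # already-a-string case is left untouched (no-op)
                tool_call["function"]["arguments"] = json.dumps(arguments)
    return messages

messages = [
    {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {
                "id": "call_1",
                "type": "function",
                "function": {"name": "get_weather", "arguments": {"city": "Paris"}},
            }
        ],
    }
]
tokens = litellm.token_counter(model="gpt-4o", messages=stringify_tool_call_arguments(messages))
```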
* build(model_prices_and_context_window.json): add gemini 2.0 flash lite pricing
* build(model_prices_and_context_window.json): add gemini commercial rate limits
* fix(utils.py): fix linting error
* refactor(utils.py): refactor to maintain function size
---------
Co-authored-by: Bardia Khosravi <bardiakhosravi95@gmail.com>
Co-authored-by: Josh Morrow <josh@jcmorrow.com>
* fix(convert_dict_to_response.py): only convert if response is the response_format tool call passed in
Fixes https://github.com/BerriAI/litellm/issues/8241
* fix(gpt_transformation.py): makes sure response format / tools conversion doesn't remove previous tool calls
* refactor(gpt_transformation.py): refactor out json schema conversion to base config
keeps logic consistent across providers
* fix(o_series_transformation.py): support o3 mini native streaming
Fixes https://github.com/BerriAI/litellm/issues/8274
* fix(gpt_transformation.py): remove unused variables
* test: update test
* initial transform for invoke
* invoke transform_response
* working - able to make request
* working get_complete_url
* working - invoke now runs on llm_http_handler
* fix unused imports
* track litellm overhead ms
* working stream request
* sign_request transform
* sign_request update
* use has_async_custom_stream_wrapper property
* use get_async_custom_stream_wrapper in base llm http handler
* fix make_call in invoke handler
* fix invoke with streaming get_async_custom_stream_wrapper
* working bedrock async streaming with invoke
* fix make call handler for bedrock
* test_all_model_configs
* fix test_bedrock_custom_prompt_template
* sync streaming for bedrock invoke
* fix _add_stream_param_to_request_body
* test_async_text_completion_bedrock
* fix transform_request
* fix get_supported_openai_params
* fix test supports tool choice
* fix test_supports_tool_choice
* add unit test coverage for bedrock invoke transform
* fix location of transformation files
* update import loc
* fix bedrock invoke unit tests
* fix import for max completion tokens
* refactor(deepseek/): move deepseek to base llm http handler
Fixes https://github.com/BerriAI/litellm/issues/8128#issuecomment-2635430457
* fix(gpt_transformation.py): support stream parsing for gpt-like calls
* test(test_deepseek_completion.py): add async streaming test
* fix(gpt_transformation.py): fix import
* fix(gpt_transformation.py): return full api base and content type
* feat(proxy/_types.py): add new jwt field params
allows users + services to auth into proxy
* feat(handle_jwt.py): allow team role proxy access
allows proxy admin to set allowed team roles
* fix(proxy/_types.py): add 'routes' to role based permissions
allow proxy admin to restrict what routes a team can access easily
* feat(handle_jwt.py): support more flexible role based route access
v2 on role based 'allowed_routes'
* test(test_jwt.py): add unit test for rbac for proxy routes
* feat(handle_jwt.py): ensure cost tracking always works for any jwt request with `enforce_rbac=True`
* docs(token_auth.md): add documentation on controlling model access via OIDC Roles
* test: increase time delay before retrying
* test: handle model overloaded for test
* test(base_llm_unit_tests.py): add test to ensure drop params is respected
* fix(types/prometheus.py): use typing_extensions for python3.8 compatibility
* build: add cherry picked commits
* fix(vertex_ai/gemini/transformation.py): handle 'http://' image urls
* test: add base test for `http:` urls
* fix(factory.py/get_image_details): follow redirects
allows http calls to work
* fix(codestral/): fix stream chunk parsing on last chunk of stream
* Azure ad token provider (#6917)
* Update azure.py
Added optional parameter azure ad token provider
* Added parameter to main.py
* Found token provider arg location
* Fixed embeddings
* Fixed ad token provider
---------
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
* fix: fix linting errors
* fix(main.py): leave out o1 route for azure ad token provider, for now
get v0 out for sync azure gpt route to begin with
* test: skip http:// test for fireworks ai
model does not support it
* refactor: cleanup dead code
* fix: revert http:// url passthrough for gemini
google ai studio raises errors
* test: fix test
---------
Co-authored-by: bahtman <anton@baht.dk>
* fix(ui_sso.py): use common `get_user_object` logic across jwt + ui sso auth
Allows finding users by their email, and attaching the sso user id to the user if found
* Improve Team Management flow on UI (#8204)
* build(teams.tsx): refactor teams page to make it easier to add members to a team
make a row in table clickable -> allows user to add users to team they intended
* build(teams.tsx): make it clear user should click on team id to view team details
simplifies team management by putting team details on separate page
* build(team_info.tsx): separately show user id and user email
make it easy for user to understand the information they're seeing
* build(team_info.tsx): add back in 'add member' button
* build(team_info.tsx): working team member update on team_info.tsx
* build(team_info.tsx): enable team member delete on ui
allow user to delete accidental adds
* build(internal_user_endpoints.py): expose new endpoint for ui to allow filtering on user table
allows proxy admin to quickly find user they're looking for
* feat(team_endpoints.py): expose new team filter endpoint for ui
allows proxy admin to easily find team they're looking for
* feat(user_search_modal.tsx): allow admin to filter on users when adding new user to teams
* test: mark flaky test
* test: mark flaky test
* fix(exception_mapping_utils.py): fix anthropic text route error
* fix(ui_sso.py): handle situation when user not in db
* fix(o_series_transformation.py): add 'reasoning_effort' as o series model param
Closes https://github.com/BerriAI/litellm/issues/8182
* fix(main.py): ensure `reasoning_effort` is a mapped openai param
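A short hedged example of the mapped param in use; with the changes above, `reasoning_effort` is passed through for o-series models (and dropped or translated where unsupported).
```python
import litellm

response = litellm.completion(
    model="o3-mini",
    messages=[{"role": "user", "content": "Summarize LiteLLM in one sentence."}],
    reasoning_effort="low",  # now recognized as a mapped OpenAI param
)
```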
* refactor(azure/): rename o1_[x] files to o_series_[x]
* refactor(base_llm_unit_tests.py): refactor testing for o series reasoning effort
* test(test_azure_o_series.py): have azure o series tests correctly inherit from base o series model tests
* feat(base_utils.py): support translating 'developer' role to 'system' role for non-openai providers
Makes it easy to switch from openai to anthropic
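A hedged sketch of the role translation: an OpenAI-style `developer` message sent to a non-OpenAI provider should be mapped to a `system` message under the hood.
```python
import litellm

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=[
        {"role": "developer", "content": "Answer in one sentence."},  # translated to "system" for Anthropic
        {"role": "user", "content": "What is LiteLLM?"},
    ],
)
```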
* fix: fix linting errors
* fix(base_llm_unit_tests.py): fix test
* fix(main.py): add missing param
* fix: support azure o3 model family for fake streaming workaround (#8162)
* fix: support azure o3 model family for fake streaming workaround
* refactor: rename helper to is_o_series_model for clarity
* update function calling parameters for o3 models (#8178)
* refactor(o1_transformation.py): refactor o1 config to be o series config, expand o series model check to o3
ensures max_tokens is correctly translated for o3
* feat(openai/): refactor o1 files to be 'o_series' files
expands naming to cover o3
* fix(azure/chat/o1_handler.py): azure openai is an instance of openai - was causing resets
* test(test_azure_o_series.py): assert stream faked for azure o3 mini
Resolves https://github.com/BerriAI/litellm/pull/8162
* fix(o1_transformation.py): fix o1 transformation logic to handle explicit o1_series routing
* docs(azure.md): update doc with `o_series/` model name
---------
Co-authored-by: byrongrogan <47910641+byrongrogan@users.noreply.github.com>
Co-authored-by: Low Jian Sheng <15527690+lowjiansheng@users.noreply.github.com>
* add support for using llama spec with bedrock
* fix get_bedrock_invoke_provider
* add support for using bedrock provider in mappings
* working request
* test_bedrock_custom_deepseek
* test_bedrock_custom_deepseek
* fix _get_model_id_for_llama_like_model
* test_bedrock_custom_deepseek
* doc DeepSeek-R1-Distill-Llama-70B
* test_bedrock_custom_deepseek
* feat(lowest_tpm_rpm_v2.py): fix redis cache check to use >= instead of >
makes it consistent
* test(test_custom_guardrails.py): add more unit testing on default on guardrails
ensure they run even if the user-sent guardrail list is empty
* docs(quick_start.md): clarify that default-on guardrails run even if the user's guardrails list contains other guardrails
* refactor(litellm_logging.py): refactor no-log to helper util
allows for more consistent behavior
* feat(litellm_logging.py): add event hook to verbose logs
* fix(litellm_logging.py): add unit testing to ensure `litellm.disable_no_log_param` is respected
* docs(logging.md): document how to disable 'no-log' param
* test: fix test to handle feb
* test: cleanup old bedrock model
* fix: fix router check
* docs: cleanup doc
* feat(bedrock/): initial commit adding bedrock/converse_like/<model> route support
allows routing to a converse like endpoint
Resolves https://github.com/BerriAI/litellm/issues/8085
* feat(bedrock/chat/converse_transformation.py): make converse config base config compatible
enables new 'converse_like' route
* feat(converse_transformation.py): enables using the proxy with converse like api endpoint
Resolves https://github.com/BerriAI/litellm/issues/8085
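A hedged usage sketch of the new `converse_like` route; the api_base, api_key, and model name are placeholders for any Converse-compatible endpoint.
```python
import litellm

response = litellm.completion(
    model="bedrock/converse_like/my-custom-model",            # placeholder model
    api_base="https://example.com/converse-compatible/chat",  # placeholder Converse-compatible endpoint
    api_key="sk-placeholder",
    messages=[{"role": "user", "content": "Hello"}],
)
```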
* refactor(factory.py): refactor async bedrock message transformation to use async get request for image url conversion
improve latency of bedrock call
* test(test_bedrock_completion.py): add unit testing to ensure async image url get called for async bedrock call
* refactor(factory.py): refactor bedrock translation to use BedrockImageProcessor
reduces duplicate code
* fix(factory.py): fix bug not allowing pdf's to be processed
* fix(factory.py): fix bedrock converse document understanding with image url
* docs(bedrock.md): clarify all bedrock document types are supported
* refactor: cleanup redundant test + unused imports
* perf: improve perf with reusable clients
* test: fix test
* fix(base_utils.py): support nested json schema passed in for anthropic calls
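A hedged illustration of a nested schema (which produces `$ref`/`$defs` in the generated JSON schema) passed via `response_format`; the model choice and exact resolution behavior are assumptions based on the commits above.
```python
from pydantic import BaseModel
import litellm

class Address(BaseModel):
    city: str
    country: str

class Person(BaseModel):
    name: str
    address: Address  # nested model -> nested $ref in the generated JSON schema

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=[{"role": "user", "content": "Alice lives in Paris, France."}],
    response_format=Person,  # nested $refs should now be resolved instead of erroring or looping
)
```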
* refactor(base_utils.py): refactor ref parsing to prevent infinite loop
* test(test_openai_endpoints.py): refactor anthropic test to use bedrock
* fix(langfuse_prompt_management.py): add unit test for sync langfuse calls
Resolves https://github.com/BerriAI/litellm/issues/7938#issuecomment-2613293757
* fix(bedrock/converse_handler.py): fix bedrock region name on async calls
* fix(utils.py): fix split model handling
Fixes bedrock cost calculation when region name is given
* feat(_health_endpoints.py): support health checking datadog integration
Closes https://github.com/BerriAI/litellm/issues/7921
* fix(types/utils.py): support returning 'reasoning_content' for deepseek models
Fixes https://github.com/BerriAI/litellm/issues/7877#issuecomment-2603813218
* fix(convert_dict_to_response.py): return deepseek response in provider_specific_field
allows for separating openai vs. non-openai params in model response
* fix(utils.py): support 'provider_specific_field' in delta chunk as well
allows deepseek reasoning content chunk to be returned to user from stream as well
Fixes https://github.com/BerriAI/litellm/issues/7877#issuecomment-2603813218
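A hedged sketch of how the reasoning trace surfaces to callers after these changes; the exact attribute path (`provider_specific_fields` vs. a top-level field) is an assumption based on the commits above.
```python
import litellm

response = litellm.completion(
    model="deepseek/deepseek-reasoner",
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
)

message = response.choices[0].message
print(message.content)  # the final answer
# reasoning trace returned alongside the answer (field name per the commits above)
print(getattr(message, "provider_specific_fields", None))
```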
* fix(watsonx/chat/handler.py): fix passing space id to watsonx on chat route
* fix(watsonx/): fix watsonx_text/ route with space id
* fix(watsonx/): qa item - also adds better unit testing for watsonx embedding calls
* fix(utils.py): rename to '..fields'
* fix: fix linting errors
* fix(utils.py): fix typing - don't show provider-specific field if none or empty - prevents default response from being non-oai compatible
* fix: cleanup unused imports
* docs(deepseek.md): add docs for deepseek reasoning model
* fix(proxy_server.py): fix get model info when litellm_model_id is set
Fixes https://github.com/BerriAI/litellm/issues/7873
* test(test_models.py): add test to ensure get model info on specific deployment has same value as all model info
Fixes https://github.com/BerriAI/litellm/issues/7873
* fix(usage.tsx): make model analytics free
Fixes @iqballx's feedback
* fix(invoke_handler.py): fix bedrock error chunk parsing - return correct bedrock status code and error message if chunk in stream
Improves bedrock stream error handling
* fix(proxy_server.py): fix linting errors
* test(test_auth_checks.py): remove redundant test
* fix(proxy_server.py): fix linting errors
* test: fix flaky test
* test: fix test
* fix(router.py): pass stream timeout correctly for non openai / azure models
Fixes https://github.com/BerriAI/litellm/issues/7870
* test(test_router_timeout.py): add test for streaming
* test(test_router_timeout.py): add unit testing for new router functions
* docs(ollama.md): link to section on calling ollama within docker container
* test: remove redundant test
* test: fix test to include timeout value
* docs(config_settings.md): document new router settings param