* feat(batches/): fix batch cost calculation - ensure it's accurate
use the correct cost value - previously defaulted to the non-batch cost
* feat(batch_utils.py): log batch models to spend logs + standard logging payload
makes it easy to understand how cost was calculated
* fix: fix stored payload for test
* test: fix test
* feat: initial commit - enable dev to see translated request
* feat(utils.py): expose new endpoint - `/utils/transform_request` to see the raw request sent by litellm
* feat(transform_request.tsx): allow user to see their transformed request
* refactor(litellm_logging.py): return raw request in 3 parts - api_base, headers, request body
easier to render each individually on UI vs. extracting from combined string
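A minimal sketch of calling the new endpoint from a script - the payload keys and response field names here are assumptions, not the confirmed schema:
```python
import requests

# Hypothetical usage of `/utils/transform_request` - payload and response
# field names are assumptions based on the commits above.
resp = requests.post(
    "http://localhost:4000/utils/transform_request",
    headers={"Authorization": "Bearer sk-1234"},  # proxy virtual key
    json={
        "call_type": "completion",
        "request_body": {
            "model": "bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
            "messages": [{"role": "user", "content": "Hello"}],
        },
    },
)
data = resp.json()
print(data.get("raw_request_api_base"))  # raw request, returned in 3 parts
print(data.get("raw_request_headers"))
print(data.get("raw_request_body"))
```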
* feat: transform_request.tsx
working e2e raw request viewing
* fix(litellm_logging.py): fix transform viewing for bedrock models
* fix(litellm_logging.py): don't return sensitive headers in raw request headers
prevent accidental leak
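Roughly the redaction this implies - the helper name and header list below are illustrative, not the actual litellm_logging.py code:
```python
# Illustrative only - function name and header list are assumptions.
SENSITIVE_HEADERS = {"authorization", "api-key", "x-api-key", "x-goog-api-key"}

def redact_sensitive_headers(headers: dict) -> dict:
    """Mask credential-bearing headers before showing them on the UI."""
    return {
        k: "*****" if k.lower() in SENSITIVE_HEADERS else v
        for k, v in headers.items()
    }

print(redact_sensitive_headers(
    {"Authorization": "Bearer sk-secret", "Content-Type": "application/json"}
))
# -> {'Authorization': '*****', 'Content-Type': 'application/json'}
```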
* feat(transform_request.tsx): style improvements
* fix(transformation.py): support a 'format' parameter for images
allow user to specify mime type
* fix: pass mimetype via 'format' param
* feat(gemini/chat/transformation.py): support 'format' param for gemini
* fix(factory.py): support 'format' param on sync bedrock converse calls
* feat(bedrock/converse_transformation.py): support 'format' param for bedrock async calls
* refactor(factory.py): move to supporting 'format' param in base helper
ensures consistency in param support
* feat(gpt_transformation.py): filter out 'format' param
don't send invalid param to openai
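Putting the 'format' commits together, intended usage looks roughly like this (the exact placement of `format` inside the image block is an assumption):
```python
import litellm

response = litellm.completion(
    model="gemini/gemini-1.5-flash",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://example.com/cat.png",
                    "format": "image/png",  # new: user-specified mime type
                },
            },
        ],
    }],
)
```
Per the gpt_transformation.py commits above, the same request against an openai model has `format` filtered out before sending.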
* fix(gpt_transformation.py): fix translation
* fix: fix translation error
* Fix missing signature_delta in thinking blocks when streaming from Claude 3.7 (#8797)
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
* test: update test to enforce signature found
* refactor: rename signature param from 'signature_delta' to 'signature' - keeps it in sync with anthropic
* fix: fix linting error
---------
Co-authored-by: Martin Krasser <krasserm@googlemail.com>
* fix(core_helpers.py): handle litellm_metadata instead of 'metadata'
* feat(batches/): ensure batches logs are written to db
makes the batches response dict-compatible
* fix(cost_calculator.py): handle batch response being a dictionary
* fix(batches/main.py): modify retrieve endpoints to use @client decorator
enables logging to work on retrieve call
* fix(batches/main.py): fix retrieve batch response type to be 'dict' compatible
* fix(spend_tracking_utils.py): send unique uuid for retrieve batch call type
since create batch and retrieve batch share the same id
* fix(spend_tracking_utils.py): prevent duplicate retrieve batch calls from being double counted
* refactor(batches/): refactor cost tracking for batches - do it on retrieve, and within the established litellm_logging pipeline
ensures cost is always logged to db
* fix: fix linting errors
* fix: fix linting error
* fix(streaming_handler.py): fix deepseek reasoning content streaming
Fixes https://github.com/BerriAI/litellm/issues/8939
* test(test_streaming_handler.py): add unit test to streaming handle 'is_chunk_non_empty' function
ensures 'reasoning_content' is handled correctly
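For context, reading `reasoning_content` off the stream looks roughly like this usage sketch:
```python
import litellm

stream = litellm.completion(
    model="deepseek/deepseek-reasoner",
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta
    if getattr(delta, "reasoning_content", None):  # reasoning arrives first
        print(delta.reasoning_content, end="")
    elif delta.content:
        print(delta.content, end="")
```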
* fix test_moderations_bad_model
* use async_post_call_failure_hook
* basic logging errors in DB
* show status on ui
* show status on ui
* ui show request / response side by side
* stash fixes
* working, track raw request
* track error info in metadata
* fix showing error / request / response logs
* show traceback on error viewer
* ui with traceback of error
* fix async_post_call_failure_hook
* fix(http_parsing_utils.py): orjson can throw errors on some emojis in text, default to json.loads
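The fallback pattern is roughly the following sketch - the actual http_parsing_utils.py signature may differ:
```python
import json

import orjson

def safe_json_loads(body: bytes) -> dict:
    """orjson rejects some byte sequences (e.g. certain emojis) that the
    stdlib parser handles - fall back to json.loads on failure."""
    try:
        return orjson.loads(body)
    except orjson.JSONDecodeError:
        return json.loads(body)
```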
* test_get_error_information
* fix code quality
* rename proxy track cost callback test
* _should_store_errors_in_spend_logs
* feature flag error logs
* Revert "_should_store_errors_in_spend_logs"
This reverts commit 7f345df477.
* Revert "feature flag error logs"
This reverts commit 0e90c022bb.
* test_spend_logs_payload
* fix OTEL log_db_metrics
* fix import json
* fix ui linting error
* test_async_post_call_failure_hook
* test_chat_completion_bad_model_with_spend_logs
---------
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
* feat(bedrock/converse/transformation.py): support claude-3-7-sonnet reasoning_content transformation
Closes https://github.com/BerriAI/litellm/issues/8777
* fix(bedrock/): support returning `reasoning_content` on streaming for claude-3-7
Resolves https://github.com/BerriAI/litellm/issues/8777
* feat(bedrock/): unify converse reasoning content blocks for consistency across anthropic and bedrock
* fix(anthropic/chat/transformation.py): handle deepseek-style 'reasoning_content' extraction within transformation.py
simpler logic
* feat(bedrock/): fix streaming to return blocks in consistent format
* fix: fix linting error
* test: fix test
* feat(factory.py): fix bedrock thinking block translation on tool calling
allows passing the thinking blocks back to bedrock for tool calling
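A usage sketch of the thinking-block round trip - the model id, budget, and tool schema are illustrative:
```python
import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = litellm.completion(
    model="bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0",
    messages=[{"role": "user", "content": "What's the weather in SF?"}],
    thinking={"type": "enabled", "budget_tokens": 1024},
    tools=tools,
)
msg = resp.choices[0].message
print(msg.reasoning_content)  # unified reasoning text (see commits above)
# passing `msg` - thinking blocks included - back in the next request is
# what the factory.py fix above enables for bedrock tool calling
```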
* fix(types/utils.py): don't exclude provider_specific_fields on model dump
ensures consistent responses
* fix: fix linting errors
* fix(convert_dict_to_response.py): pass reasoning_content on root
* fix: test
* fix(streaming_handler.py): add helper util for setting model id
* fix(streaming_handler.py): fix setting model id on model response stream chunk
* fix(streaming_handler.py): fix linting error
* fix(streaming_handler.py): fix linting error
* fix(types/utils.py): add provider_specific_fields to model stream response
* fix(streaming_handler.py): copy provider specific fields and add them to the root of the streaming response
* fix(streaming_handler.py): fix check
* fix: fix test
* fix(types/utils.py): ensure messages content is always openai compatible
* fix(types/utils.py): fix delta object to always be openai compatible
only introduce new params if variable exists
* test: fix bedrock nova tests
* test: skip flaky test
* test: skip flaky test in ci/cd
* use class ResetBudgetJob
* refactor reset budget job
* update reset_budget job
* refactor reset budget job
* fix LiteLLM_UserTable
* refactor reset budget job
* add telemetry for reset budget job
* dd - log service success/failure on DD
* add detailed reset budget reset info on DD
* initialize_scheduled_background_jobs
* refactor reset budget job
* trigger service failure hook when it fails to reset a budget for a team, key, or user
* fix resetBudgetJob
* unit testing for ResetBudgetJob
* test_duration_in_seconds_basic
* testing for triggering service logging
* fix logs when test teams fail
* remove unused imports
* fix import of duration_in_seconds
* duration_in_seconds
* fix(azure/chat/gpt_transformation.py): add 'prediction' as a supported azure param
Closes https://github.com/BerriAI/litellm/issues/8500
* build(model_prices_and_context_window.json): add new 'gemini-2.0-pro-exp-02-05' model
* style: cleanup invalid json trailing comma
* feat(utils.py): support passing 'tokenizer_config' to register_prompt_template
enables passing the complete tokenizer config of a model to litellm
allows calling deepseek on bedrock with the correct prompt template
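A sketch of the new param - tokenizer_config mirrors a HF tokenizer_config.json; the model name and chat_template below are trivial stand-ins, not deepseek's real values:
```python
import litellm

litellm.register_prompt_template(
    model="bedrock/my-imported-deepseek",  # hypothetical model name
    tokenizer_config={
        "bos_token": "<s>",
        "eos_token": "</s>",
        "chat_template": (
            "{% for m in messages %}"
            "{{ m['role'] }}: {{ m['content'] }}\n"
            "{% endfor %}"
        ),
    },
)
```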
* fix(utils.py): fix register_prompt_template for custom model names
* test(test_prompt_factory.py): fix test
* test(test_completion.py): add e2e test for bedrock invoke deepseek ft model
* feat(base_invoke_transformation.py): support hf_model_name param for bedrock invoke calls
enables proxy admin to set base model for ft bedrock deepseek model
* feat(bedrock/invoke): support deepseek_r1 route for bedrock
makes it easy to apply the right chat template to that call
* feat(constants.py): store deepseek r1 chat template - allow user to get correct response from deepseek r1 without extra work
* test(test_completion.py): add e2e mock test for bedrock deepseek
* docs(bedrock.md): document new deepseek_r1 route for bedrock
allows us to use the right config
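Usage sketch for the new route (the ARN is a placeholder):
```python
import litellm

response = litellm.completion(
    model="bedrock/deepseek_r1/arn:aws:bedrock:us-east-1:000000000000:imported-model/my-deepseek",
    messages=[{"role": "user", "content": "Hello"}],
)
```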
* fix(exception_mapping_utils.py): catch read operation timeout
* fix(utils.py): fix vertex ai optional param handling
don't pass max retries to unsupported route
Fixes https://github.com/BerriAI/litellm/issues/8254
* fix(get_supported_openai_params.py): fix linting error
* fix(get_supported_openai_params.py): default to openai-like spec
* test: fix test
* fix: fix linting error
* Improved wildcard route handling on `/models` and `/model_group/info` (#8473)
* fix(model_checks.py): update returning known model from wildcard to filter based on given model prefix
ensures wildcard route - `vertex_ai/gemini-*` just returns known vertex_ai/gemini- models
* test(test_proxy_utils.py): add unit testing for new 'get_known_models_from_wildcard' helper
* test(test_models.py): add e2e testing for `/model_group/info` endpoint
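An illustrative reimplementation of the prefix-filter idea - not the actual model_checks.py code:
```python
from typing import List

def get_known_models_from_wildcard(wildcard_model: str, all_models: List[str]) -> List[str]:
    """`vertex_ai/gemini-*` -> only known models starting with `vertex_ai/gemini-`."""
    prefix = wildcard_model.rstrip("*")
    return [m for m in all_models if m.startswith(prefix)]

known = ["vertex_ai/gemini-1.5-pro", "vertex_ai/gemini-1.5-flash", "vertex_ai/claude-3-opus"]
print(get_known_models_from_wildcard("vertex_ai/gemini-*", known))
# ['vertex_ai/gemini-1.5-pro', 'vertex_ai/gemini-1.5-flash']
```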
* feat(prometheus.py): support tracking total requests by user_email on prometheus
adds initial support for tracking total requests by user_email
* test(test_prometheus.py): add testing to ensure user email is always tracked
* test: update testing for new prometheus metric
* test(test_prometheus_unit_tests.py): add user email to total proxy metric
* test: update tests
* test: fix spend tests
* test: fix test
* fix(pagerduty.py): fix linting error
* (Bug fix) - Using `include_usage` for /completions requests + unit testing (#8484)
* pass stream options (#8419)
* test_completion_streaming_usage_metrics
* test_text_completion_include_usage
---------
Co-authored-by: Kaushik Deka <55996465+Kaushikdkrikhanu@users.noreply.github.com>
* fix naming of docker stable release
* build(model_prices_and_context_window.json): handle azure model update
* docs(token_auth.md): clarify scopes can be a list or comma separated string
* docs: fix docs
* add sonar pricings (#8476)
* add sonar pricings
* Update model_prices_and_context_window.json
* Update model_prices_and_context_window.json
* Update model_prices_and_context_window_backup.json
* update load testing script
* fix test_async_router_context_window_fallback
* pplx - fix supports tool choice openai param (#8496)
* fix prom check startup (#8492)
* test_async_router_context_window_fallback
* ci(config.yml): mark daily docker builds with `-nightly` (#8499)
Resolves https://github.com/BerriAI/litellm/discussions/8495
* (Redis Cluster) - Fixes for using redis cluster + pipeline (#8442)
* update RedisCluster creation
* update RedisClusterCache
* add redis ClusterCache
* update async_set_cache_pipeline
* cleanup redis cluster usage
* fix redis pipeline
* test_init_async_client_returns_same_instance
* fix redis cluster
* update mypy_path
* fix init_redis_cluster
* remove stub
* test redis commit
* ClusterPipeline
* fix import
* RedisCluster import
* fix redis cluster
* Potential fix for code scanning alert no. 2129: Clear-text logging of sensitive information
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* fix naming of redis cluster integration
* test_redis_caching_ttl_pipeline
* fix async_set_cache_pipeline
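For reference, a sketch of pointing the litellm cache at a redis cluster - the `redis_startup_nodes` kwarg name is an assumption based on these commits:
```python
import litellm
from litellm import Cache

litellm.cache = Cache(
    type="redis",
    redis_startup_nodes=[  # redis-py ClusterNode-style host/port dicts
        {"host": "redis-node-1", "port": 7001},
        {"host": "redis-node-2", "port": 7002},
    ],
)
```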
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* Litellm UI stable version 02 12 2025 (#8497)
* fix(key_management_endpoints.py): fix `/key/list` to include `return_full_object` as a top-level query param
Allows user to specify they want the keys as a list of objects
* refactor(key_list.tsx): initial refactor of key table in user dashboard
offloads key filtering logic to backend api
prevents common error of user not being able to see their keys
* fix(key_management_endpoints.py): allow internal user to query `/key/list` to see their keys
* fix(key_management_endpoints.py): add validation checks and filtering to `/key/list` endpoint
allow internal users to see their keys, not anybody else's
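A sketch of the updated call - query and response field names beyond `return_full_object` are assumptions:
```python
import requests

resp = requests.get(
    "http://localhost:4000/key/list",
    headers={"Authorization": "Bearer sk-1234"},
    params={"return_full_object": True},
)
for key in resp.json().get("keys", []):
    print(key)  # full key objects instead of just token strings
```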
* fix(view_key_table.tsx): fix issue where internal user could not see default team keys
* fix: fix linting error
* fix: fix linting error
* fix: fix linting error
* fix: fix linting error
* fix: fix linting error
* fix: fix linting error
* fix: fix linting error
* test_supports_tool_choice
* test_async_router_context_window_fallback
* fix: fix test (#8501)
* Litellm dev 02 12 2025 p1 (#8494)
* Resolves https://github.com/BerriAI/litellm/issues/6625 (#8459)
- enables no auth for SMTP
Signed-off-by: Regli Daniel <daniel.regli1@sanitas.com>
* add sonar pricings (#8476)
* add sonar pricings
* Update model_prices_and_context_window.json
* Update model_prices_and_context_window.json
* Update model_prices_and_context_window_backup.json
* test: fix test
---------
Signed-off-by: Regli Daniel <daniel.regli1@sanitas.com>
Co-authored-by: Dani Regli <1daniregli@gmail.com>
Co-authored-by: Lucca Zenóbio <luccazen@gmail.com>
* test: fix test
* UI Fixes p2 (#8502)
* refactor(admin.tsx): cleanup add new admin flow
removes the buggy flow; ensures just one simple way to add users / update roles.
* fix(user_search_modal.tsx): ensure 'add member' button is always visible
* fix(edit_membership.tsx): ensure 'save changes' button always visible
* fix(internal_user_endpoints.py): ensure user in org can be deleted
Fixes issue where user couldn't be deleted if they were a member of an org
* fix: fix linting error
* add phoenix docs for observability integration (#8522)
* Add files via upload
* Update arize_integration.md
* Update arize_integration.md
* add Phoenix docs
* Added custom_attributes to additional_keys which can be sent to athina (#8518)
* (UI) fix log details page (#8524)
* rollback changes to view logs page
* ui new build
* add interface for prefetch
* fix spread operation
* fix max size for request view page
* clean up table
* ui fix column on request logs page
* ui new build
* Add UI Support for Admins to Call /cache/ping and View Cache Analytics (#8475) (#8519)
* [Bug] UI: Newly created key does not display on the View Key Page (#8039)
- Fixed issue where all keys appeared blank for admin users.
- Implemented filtering of data via team settings to ensure all keys are displayed correctly.
* Fix:
- Updated the validator to allow model editing when `keyTeam.team_alias === "Default Team"`.
- Ensured other teams still follow the original validation rules.
* - added some classes in global.css
- added text wrap in output of request, response and metadata in index.tsx
- fixed styles of table in table.tsx
* - added full payload when we open single log entry
- added Combined Info Card in index.tsx
* fix: keys not showing on refresh for internal user
* merge
* main merge
* cache page
* ca remove
* terms change
* fix: places caching inside exp
---------
Signed-off-by: Regli Daniel <daniel.regli1@sanitas.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Kaushik Deka <55996465+Kaushikdkrikhanu@users.noreply.github.com>
Co-authored-by: Lucca Zenóbio <luccazen@gmail.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Co-authored-by: Dani Regli <1daniregli@gmail.com>
Co-authored-by: exiao <exiao@users.noreply.github.com>
Co-authored-by: vivek-athina <153479827+vivek-athina@users.noreply.github.com>
Co-authored-by: Taha Ali <123803932+tahaali-dev@users.noreply.github.com>
* fix(router.py): add more deployment timeout debug information for timeout errors
help understand why some calls in high-traffic don't respect their model-specific timeouts
* test(test_convert_dict_to_response.py): unit test ensuring empty str is not converted to None
Addresses https://github.com/BerriAI/litellm/issues/8507
* fix(convert_dict_to_response.py): handle empty message str - don't return back as 'None'
Fixes https://github.com/BerriAI/litellm/issues/8507
* test(test_completion.py): add e2e test
* fix(litellm_logging.py): support saving applied guardrails in logging object
allows list of applied guardrails to be logged for proxy admin's knowledge
* feat(spend_tracking_utils.py): log applied guardrails to spend logs
makes it easy for admin to know what guardrails were applied on a request
* ci(config.yml): uninstall posthog from ci/cd
* test: fix tests
* test: update test
* Fixed issue #8246 (#8250)
* Fixed issue #8246
* Added unit tests for discard() and for remove_callback_from_list_by_object()
* fix(openai.py): support dynamic passing of organization param to openai
handles scenario where client-side org id is passed to openai
---------
Co-authored-by: Erez Hadad <erezh@il.ibm.com>
* fix(caching_routes.py): mask redis password on `/cache/ping` route
* fix(caching_routes.py): fix linting error
* fix(caching_routes.py): fix linting error on caching routes
* fix: fix test - ignore mask_dict - has a breakpoint
* fix(azure.py): add timeout param + elapsed time in azure timeout error
* fix(http_handler.py): add elapsed time to http timeout request
makes it easier to debug how long request took before failing
* refactor _get_langfuse_input_output_content
* test_langfuse_logging_completion_with_malformed_llm_response
* fix _get_langfuse_input_output_content
* fixes for langfuse linting
* unit testing for get chat/text content for langfuse
* fix _should_raise_content_policy_error
* fix(convert_dict_to_response.py): only convert if response is the response_format tool call passed in
Fixes https://github.com/BerriAI/litellm/issues/8241
* fix(gpt_transformation.py): makes sure response format / tools conversion doesn't remove previous tool calls
* refactor(gpt_transformation.py): refactor out json schema conversion to base config
keeps logic consistent across providers
* fix(o_series_transformation.py): support o3 mini native streaming
Fixes https://github.com/BerriAI/litellm/issues/8274
* fix(gpt_transformation.py): remove unused variables
* test: update test
* refactor(deepseek/): move deepseek to base llm http handler
Fixes https://github.com/BerriAI/litellm/issues/8128#issuecomment-2635430457
* fix(gpt_transformation.py): support stream parsing for gpt-like calls
* test(test_deepseek_completion.py): add async streaming test
* fix(gpt_transformation.py): fix import
* fix(gpt_transformation.py): return full api base and content type
* fix(ui_sso.py): use common `get_user_object` logic across jwt + ui sso auth
Allows finding users by their email, and attaching the sso user id to the user if found
* Improve Team Management flow on UI (#8204)
* build(teams.tsx): refactor teams page to make it easier to add members to a team
make a row in table clickable -> allows user to add users to team they intended
* build(teams.tsx): make it clear user should click on team id to view team details
simplifies team management by putting team details on separate page
* build(team_info.tsx): separately show user id and user email
make it easy for user to understand the information they're seeing
* build(team_info.tsx): add back in 'add member' button
* build(team_info.tsx): working team member update on team_info.tsx
* build(team_info.tsx): enable team member delete on ui
allow user to delete accidental adds
* build(internal_user_endpoints.py): expose new endpoint for ui to allow filtering on user table
allows proxy admin to quickly find user they're looking for
* feat(team_endpoints.py): expose new team filter endpoint for ui
allows proxy admin to easily find team they're looking for
* feat(user_search_modal.tsx): allow admin to filter on users when adding new user to teams
* test: mark flaky test
* test: mark flaky test
* fix(exception_mapping_utils.py): fix anthropic text route error
* fix(ui_sso.py): handle situation when user not in db
* fix(o_series_transformation.py): add 'reasoning_effort' as o series model param
Closes https://github.com/BerriAI/litellm/issues/8182
* fix(main.py): ensure `reasoning_effort` is a mapped openai param
* refactor(azure/): rename o1_[x] files to o_series_[x]
* refactor(base_llm_unit_tests.py): refactor testing for o series reasoning effort
* test(test_azure_o_series.py): have azure o series tests correctly inherit from base o series model tests
* feat(base_utils.py): support translating 'developer' role to 'system' role for non-openai providers
Makes it easy to switch from openai to anthropic
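Usage sketch - with this change an openai-style 'developer' message works against providers that only know 'system':
```python
import litellm

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",
    messages=[
        {"role": "developer", "content": "Always answer in French."},
        {"role": "user", "content": "Hi"},
    ],
)
print(response.choices[0].message.content)
```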
* fix: fix linting errors
* fix(base_llm_unit_tests.py): fix test
* fix(main.py): add missing param
* fix: support azure o3 model family for fake streaming workaround (#8162)
* fix: support azure o3 model family for fake streaming workaround
* refactor: rename helper to is_o_series_model for clarity
* update function calling parameters for o3 models (#8178)
* refactor(o1_transformation.py): refactor o1 config to be o series config, expand o series model check to o3
ensures max_tokens is correctly translated for o3
* feat(openai/): refactor o1 files to be 'o_series' files
expands naming to cover o3
* fix(azure/chat/o1_handler.py): azure openai is an instance of openai - was causing resets
* test(test_azure_o_series.py): assert stream faked for azure o3 mini
Resolves https://github.com/BerriAI/litellm/pull/8162
* fix(o1_transformation.py): fix o1 transformation logic to handle explicit o1_series routing
* docs(azure.md): update doc with `o_series/` model name
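An illustrative version of the renamed helper - the real azure config check may differ:
```python
# Illustrative only - mirrors the intent of the rename, not the exact code.
def is_o_series_model(model: str) -> bool:
    return "o1" in model or "o3" in model or "o_series/" in model

assert is_o_series_model("azure/o3-mini")      # fake streaming applies
assert not is_o_series_model("azure/gpt-4o")   # regular streaming path
```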
---------
Co-authored-by: byrongrogan <47910641+byrongrogan@users.noreply.github.com>
Co-authored-by: Low Jian Sheng <15527690+lowjiansheng@users.noreply.github.com>
* Add O3-Mini for Azure and Remove Vision Support (#8161)
* Azure released O3-mini at the same time as OAI, so I've added support here. Confirmed to work with Sweden Central.
* [FIX] replace cgi for python 3.13 with email.Message as suggested in PEP 594 (#8160)
* Update model_prices_and_context_window.json (#8120)
codestral2501 pricing on vertex_ai
* Fix/db view names (#8119)
* Fix case-sensitive DB view name
* Fix case-sensitive DB view names
* Added quotes to check query as well
* Added quotes to create view query
* test: handle server error for flaky test
vertex ai has unstable endpoints
---------
Co-authored-by: Wanis Elabbar <70503629+elabbarw@users.noreply.github.com>
Co-authored-by: Honghua Dong <dhh1995@163.com>
Co-authored-by: superpoussin22 <vincent.nadal@orange.fr>
Co-authored-by: Miguel Armenta <37154380+ma-armenta@users.noreply.github.com>
* Litellm dev 01 29 2025 p4 (#8107)
* fix(key_management_endpoints.py): always get db team
Fixes https://github.com/BerriAI/litellm/issues/7983
* test(test_key_management.py): add unit test enforcing check_db_only is always true on key generate checks
* test: fix test
* test: skip gemini thinking
* Litellm dev 01 29 2025 p3 (#8106)
* fix(__init__.py): reduces the size of __init__.py and the scope for errors by using the correct param
* refactor(__init__.py): refactor init by cleaning up redundant params
* refactor(__init__.py): move more constants into constants.py
cleanup root
* refactor(__init__.py): more cleanup
* feat(__init__.py): expose new 'disable_hf_tokenizer_download' param
enables hf model usage in offline env
* docs(config_settings.md): document new disable_hf_tokenizer_download param
* fix: fix linting error
* fix: fix unsafe comparison
* test: fix test
* docs(public_teams.md): add doc showing how to expose public teams for users to join
* docs: add beta disclaimer on public teams
* test: update tests
* feat(lowest_tpm_rpm_v2.py): fix redis cache check to use >= instead of >
makes it consistent
* test(test_custom_guardrails.py): add more unit testing on default-on guardrails
ensures they run if the user-sent guardrail list is empty
* docs(quick_start.md): clarify that default-on guardrails run even if the user's guardrails list contains other guardrails
* refactor(litellm_logging.py): refactor no-log to helper util
allows for more consistent behavior
* feat(litellm_logging.py): add event hook to verbose logs
* fix(litellm_logging.py): add unit testing to ensure `litellm.disable_no_log_param` is respected
* docs(logging.md): document how to disable 'no-log' param
* test: fix test to handle feb
* test: cleanup old bedrock model
* fix: fix router check