* fix(transformation.py): support a 'format' parameter for images
allow user to specify mime type
* fix: pass mimetype via 'format' param
* feat(gemini/chat/transformation.py): support 'format' param for gemini
* fix(factory.py): support 'format' param on sync bedrock converse calls
* feat(bedrock/converse_transformation.py): support 'format' param for bedrock async calls
* refactor(factory.py): move to supporting 'format' param in base helper
ensures consistency in param support
* feat(gpt_transformation.py): filter out 'format' param
don't send invalid param to openai
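A minimal caller-side sketch of the new param (model name and image URL are placeholders; the field sits on the OpenAI-style image_url content block):

```python
import litellm

# Sketch: the 'format' field on image_url content lets the caller pin the
# mime type instead of relying on auto-detection.
response = litellm.completion(
    model="gemini/gemini-1.5-flash",  # placeholder model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://example.com/photo",  # placeholder URL
                        "format": "image/png",               # explicit mime type
                    },
                },
            ],
        }
    ],
)
```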
* fix(gpt_transformation.py): fix translation
* fix: fix translation error
* fix(core_helpers.py): handle litellm_metadata instead of 'metadata'
* feat(batches/): ensure batches logs are written to db
makes the batches response dict-compatible
* fix(cost_calculator.py): handle batch response being a dictionary
* fix(batches/main.py): modify retrieve endpoints to use @client decorator
enables logging to work on retrieve call
* fix(batches/main.py): fix retrieve batch response type to be 'dict' compatible
* fix(spend_tracking_utils.py): send unique uuid for retrieve batch call type
create batch and retrieve batch share the same id
* fix(spend_tracking_utils.py): prevent duplicate retrieve batch calls from being double counted
* refactor(batches/): refactor cost tracking for batches - do it on retrieve, and within the established litellm_logging pipeline
ensures cost is always logged to db
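A rough sketch of the flow these batch commits touch (file and batch ids are placeholders): cost is tracked when the batch is retrieved, via the standard litellm_logging pipeline, rather than at creation time.

```python
import litellm

batch = litellm.create_batch(
    completion_window="24h",
    endpoint="/v1/chat/completions",
    input_file_id="file-abc123",  # placeholder file id
    custom_llm_provider="openai",
)

# Retrieval now goes through the @client decorator, so logging / spend
# tracking fires here and the dict-compatible response can be written to the db.
retrieved = litellm.retrieve_batch(
    batch_id=batch.id,
    custom_llm_provider="openai",
)
print(retrieved)
```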
* fix: fix linting errors
* fix: fix linting error
* fix(common_utils.py): handle $id in response schema when calling vertex ai
Fixes issue where `$id` present in response_schema was not accepted by vertex ai
* test(test_vertex.py): add unit test to ensure $id stripped out of vertex schema
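The idea behind the `$id` fix, as a standalone sketch (the helper name is hypothetical; the real logic lives in the vertex common_utils):

```python
from typing import Any

def strip_unsupported_schema_keys(schema: Any, keys: tuple = ("$id",)) -> Any:
    """Recursively drop JSON-Schema keys (e.g. '$id') that vertex ai rejects."""
    if isinstance(schema, dict):
        return {
            k: strip_unsupported_schema_keys(v, keys)
            for k, v in schema.items()
            if k not in keys
        }
    if isinstance(schema, list):
        return [strip_unsupported_schema_keys(item, keys) for item in schema]
    return schema

cleaned = strip_unsupported_schema_keys(
    {
        "$id": "https://example.com/schema",
        "type": "object",
        "properties": {"name": {"type": "string"}},
    }
)
assert "$id" not in cleaned
```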
* fix(utils.py): fix vertex ai optional param handling
don't pass max retries to unsupported route
Fixes https://github.com/BerriAI/litellm/issues/8254
* fix(get_supported_openai_params.py): fix linting error
* fix(get_supported_openai_params.py): default to openai-like spec
* test: fix test
* fix: fix linting error
* Improved wildcard route handling on `/models` and `/model_group/info` (#8473)
* fix(model_checks.py): update returning known model from wildcard to filter based on given model prefix
ensures a wildcard route like `vertex_ai/gemini-*` returns only the known `vertex_ai/gemini-` models
* test(test_proxy_utils.py): add unit testing for new 'get_known_models_from_wildcard' helper
* test(test_models.py): add e2e testing for `/model_group/info` endpoint
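Illustrative sketch of the wildcard filtering (the helper name matches the commit above, but this signature is an assumption):

```python
import fnmatch
from typing import List

def get_known_models_from_wildcard(wildcard_route: str, known_models: List[str]) -> List[str]:
    """Return only the known models that match the wildcard prefix."""
    return [m for m in known_models if fnmatch.fnmatch(m, wildcard_route)]

print(
    get_known_models_from_wildcard(
        "vertex_ai/gemini-*",
        ["vertex_ai/gemini-1.5-pro", "vertex_ai/claude-3-sonnet", "openai/gpt-4o"],
    )
)  # -> ['vertex_ai/gemini-1.5-pro']
```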
* feat(prometheus.py): support tracking total requests by user_email on prometheus
adds initial support for tracking total requests by user_email
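Rough shape of the metric (metric and label names here are illustrative, not the exact ones litellm registers):

```python
from prometheus_client import Counter

litellm_proxy_total_requests = Counter(
    "litellm_proxy_total_requests",
    "Total proxy requests, broken out by the requesting user's email",
    labelnames=["user_email", "status_code"],
)

# Incremented per request in the proxy's logging hooks.
litellm_proxy_total_requests.labels(
    user_email="jane@example.com", status_code="200"
).inc()
```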
* test(test_prometheus.py): add testing to ensure user email is always tracked
* test: update testing for new prometheus metric
* test(test_prometheus_unit_tests.py): add user email to total proxy metric
* test: update tests
* test: fix spend tests
* test: fix test
* fix(pagerduty.py): fix linting error
* (Bug fix) - Using `include_usage` for /completions requests + unit testing (#8484)
* pass stream options (#8419)
* test_completion_streaming_usage_metrics
* test_text_completion_include_usage
---------
Co-authored-by: Kaushik Deka <55996465+Kaushikdkrikhanu@users.noreply.github.com>
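Caller-side sketch of the behavior the #8484 tests cover (model name is a placeholder): with `stream_options={"include_usage": True}`, a usage block is emitted on streamed /completions requests too.

```python
import litellm

response = litellm.text_completion(
    model="gpt-3.5-turbo-instruct",  # placeholder model
    prompt="Hello, world",
    stream=True,
    stream_options={"include_usage": True},
)

for chunk in response:
    # The final chunk should carry usage when include_usage is set.
    if getattr(chunk, "usage", None):
        print(chunk.usage)
```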
* fix naming of docker stable release
* build(model_prices_and_context_window.json): handle azure model update
* docs(token_auth.md): clarify scopes can be a list or comma separated string
* docs: fix docs
* add sonar pricings (#8476)
* add sonar pricings
* Update model_prices_and_context_window.json
* Update model_prices_and_context_window.json
* Update model_prices_and_context_window_backup.json
* update load testing script
* fix test_async_router_context_window_fallback
* pplx - fix supports tool choice openai param (#8496)
* fix prom check startup (#8492)
* test_async_router_context_window_fallback
* ci(config.yml): mark daily docker builds with `-nightly` (#8499)
Resolves https://github.com/BerriAI/litellm/discussions/8495
* (Redis Cluster) - Fixes for using redis cluster + pipeline (#8442)
* update RedisCluster creation
* update RedisClusterCache
* add redis ClusterCache
* update async_set_cache_pipeline
* cleanup redis cluster usage
* fix redis pipeline
* test_init_async_client_returns_same_instance
* fix redis cluster
* update mypy_path
* fix init_redis_cluster
* remove stub
* test redis commit
* ClusterPipeline
* fix import
* RedisCluster import
* fix redis cluster
* Potential fix for code scanning alert no. 2129: Clear-text logging of sensitive information
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* fix naming of redis cluster integration
* test_redis_caching_ttl_pipeline
* fix async_set_cache_pipeline
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
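A minimal sketch of the redis cluster + pipeline path fixed in #8442 (host/port are placeholders; redis-py returns a ClusterPipeline here, which the cache code now handles):

```python
from redis.asyncio.cluster import ClusterNode, RedisCluster

async def set_cache_pipeline(items: dict) -> None:
    client = RedisCluster(startup_nodes=[ClusterNode("127.0.0.1", 7000)])
    pipe = client.pipeline()  # ClusterPipeline, not a plain Pipeline
    for key, value in items.items():
        pipe.set(key, value, ex=60)  # 60s TTL, as in test_redis_caching_ttl_pipeline
    await pipe.execute()
    await client.aclose()

# run with: asyncio.run(set_cache_pipeline({"k1": "v1", "k2": "v2"}))
```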
* Litellm UI stable version 02 12 2025 (#8497)
* fix(key_management_endpoints.py): fix `/key/list` to include `return_full_object` as a top-level query param
Allows user to specify they want the keys as a list of objects
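Sketch of the new query param (proxy base URL and key are placeholders):

```python
import requests

resp = requests.get(
    "http://localhost:4000/key/list",
    headers={"Authorization": "Bearer sk-1234"},  # placeholder proxy admin key
    params={"return_full_object": True},          # full key objects, not just token strings
)
print(resp.json())
```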
* refactor(key_list.tsx): initial refactor of key table in user dashboard
offloads key filtering logic to backend api
prevents common error of user not being able to see their keys
* fix(key_management_endpoints.py): allow internal user to query `/key/list` to see their keys
* fix(key_management_endpoints.py): add validation checks and filtering to `/key/list` endpoint
allows an internal user to see their own keys, and nobody else's
* fix(view_key_table.tsx): fix issue where internal user could not see default team keys
* fix: fix linting error
* fix: fix linting error
* fix: fix linting error
* fix: fix linting error
* fix: fix linting error
* fix: fix linting error
* fix: fix linting error
* test_supports_tool_choice
* test_async_router_context_window_fallback
* fix: fix test (#8501)
* Litellm dev 02 12 2025 p1 (#8494)
* Resolves https://github.com/BerriAI/litellm/issues/6625 (#8459)
- enables SMTP without authentication
Signed-off-by: Regli Daniel <daniel.regli1@sanitas.com>
* add sonar pricings (#8476)
* add sonar pricings
* Update model_prices_and_context_window.json
* Update model_prices_and_context_window.json
* Update model_prices_and_context_window_backup.json
* test: fix test
---------
Signed-off-by: Regli Daniel <daniel.regli1@sanitas.com>
Co-authored-by: Dani Regli <1daniregli@gmail.com>
Co-authored-by: Lucca Zenóbio <luccazen@gmail.com>
* test: fix test
* UI Fixes p2 (#8502)
* refactor(admin.tsx): cleanup add new admin flow
removes the buggy flow; ensures there is just one simple way to add users / update roles.
* fix(user_search_modal.tsx): ensure 'add member' button is always visible
* fix(edit_membership.tsx): ensure 'save changes' button always visible
* fix(internal_user_endpoints.py): ensure user in org can be deleted
Fixes issue where user couldn't be deleted if they were a member of an org
* fix: fix linting error
* add phoenix docs for observability integration (#8522)
* Add files via upload
* Update arize_integration.md
* Update arize_integration.md
* add Phoenix docs
* Added custom_attributes to additional_keys which can be sent to athina (#8518)
* (UI) fix log details page (#8524)
* rollback changes to view logs page
* ui new build
* add interface for prefetch
* fix spread operation
* fix max size for request view page
* clean up table
* ui fix column on request logs page
* ui new build
* Add UI Support for Admins to Call /cache/ping and View Cache Analytics (#8475) (#8519)
* [Bug] UI: Newly created key does not display on the View Key Page (#8039)
- Fixed issue where all keys appeared blank for admin users.
- Implemented filtering of data via team settings to ensure all keys are displayed correctly.
* Fix:
- Updated the validator to allow model editing when `keyTeam.team_alias === "Default Team"`.
- Ensured other teams still follow the original validation rules.
* - added some classes in global.css
- added text wrap to the request, response, and metadata output in index.tsx
- fixed styles of table in table.tsx
* - added full payload when we open single log entry
- added Combined Info Card in index.tsx
* fix: keys not showing on refresh for internal user
* merge
* main merge
* cache page
* ca remove
* terms change
* fix: places caching inside exp
---------
Signed-off-by: Regli Daniel <daniel.regli1@sanitas.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Kaushik Deka <55996465+Kaushikdkrikhanu@users.noreply.github.com>
Co-authored-by: Lucca Zenóbio <luccazen@gmail.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Co-authored-by: Dani Regli <1daniregli@gmail.com>
Co-authored-by: exiao <exiao@users.noreply.github.com>
Co-authored-by: vivek-athina <153479827+vivek-athina@users.noreply.github.com>
Co-authored-by: Taha Ali <123803932+tahaali-dev@users.noreply.github.com>
* feat(pass_through_endpoints.py): fix anthropic end user cost tracking
* fix(anthropic/chat/transformation.py): use returned provider model for anthropic
handles the anthropic `-latest` tag in the request body, which was throwing cost calculation errors
ensures we can be accurate in our model cost tracking
* feat(model_prices_and_context_window.json): add gemini-2.0-flash-thinking-exp pricing
* test: update test to use assumption that user_api_key_dict can get anthropic user id
* test: fix test
* fix: fix test
* fix(anthropic_pass_through.py): uncomment previous anthropic end-user cost tracking code block
can't guarantee user api key dict always has end user id - too many code paths
* fix(user_api_key_auth.py): this allows end user id from request body to always be read and set in auth object
* fix(auth_check.py): fix linting error
* test: fix auth check
* fix(auth_utils.py): fix get end user id to handle metadata = None
* fix(vertex_ai/gemini/transformation.py): handle 'http://' in gemini process url
* refactor(router.py): refactor '_prompt_management_factory' to use logging obj get_chat_completion logic
deduplicates code
* fix(litellm_logging.py): update 'get_chat_completion_prompt' to update logging object messages
* docs(prompt_management.md): update prompt management to be in beta
given feedback - this still needs to be revised (e.g. passing in the user message, not ignoring it)
* refactor(prompt_management_base.py): introduce base class for prompt management
allows consistent behaviour across prompt management integrations
* feat(prompt_management_base.py): support adding client message to template message + refactor langfuse prompt management to use prompt management base
* fix(litellm_logging.py): log prompt id + prompt variables to langfuse if set
allows tracking what prompt was used for what purpose
* feat(litellm_logging.py): log prompt management metadata in standard logging payload + use in langfuse
allows logging prompt id / prompt variables to langfuse
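Caller-side sketch of the prompt management surface described above (model, prompt id, and variables are placeholders; langfuse credentials are assumed to be set via env vars):

```python
import litellm

response = litellm.completion(
    model="langfuse/gpt-4o",                      # placeholder langfuse-managed route
    prompt_id="my-prompt",                        # placeholder prompt id
    prompt_variables={"customer_name": "Jane"},   # substituted into the template
    messages=[{"role": "user", "content": "This is the client message."}],
)
```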
* test: fix test
* fix(router.py): cleanup unused imports
* fix: fix linting error
* fix: fix trace param typing
* fix: fix linting errors
* fix: fix code qa check
* test(azure_openai_o1.py): initial commit with testing for azure openai o1 preview model
* fix(base_llm_unit_tests.py): handle azure o1 preview response format tests
skip as o1 on azure doesn't support tool calling yet
* fix: initial commit of azure o1 handler using openai caller
simplifies calling + allows the fake streaming logic already implemented for openai to just work
* feat(azure/o1_handler.py): fake o1 streaming for azure o1 models
azure does not currently support streaming for o1
* feat(o1_transformation.py): support overriding 'should_fake_stream' on azure/o1 via 'supports_native_streaming' param on model info
lets users toggle native streaming on once azure supports o1 streaming, without needing to bump versions
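Rough sketch of the fake-streaming decision (function and field names here are illustrative):

```python
from typing import Iterator

def should_fake_stream(model_info: dict, stream_requested: bool) -> bool:
    if not stream_requested:
        return False
    # 'supports_native_streaming' on model info opts out of fake streaming
    # once azure enables real o1 streaming.
    return not model_info.get("supports_native_streaming", False)

def fake_stream(full_text: str, chunk_size: int = 40) -> Iterator[str]:
    """Yield the completed response in fixed-size chunks to mimic a stream."""
    for i in range(0, len(full_text), chunk_size):
        yield full_text[i : i + chunk_size]
```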
* style(router.py): remove 'give feedback/get help' messaging when router is used
Prevents noisy messaging
Closes https://github.com/BerriAI/litellm/issues/5942
* fix(types/utils.py): handle none logprobs
Fixes https://github.com/BerriAI/litellm/issues/328
* fix(exception_mapping_utils.py): fix error str unbound error
* refactor(azure_ai/): move to openai_like chat completion handler
allows for easy swapping of api base urls (e.g. ai.services.com)
Fixes https://github.com/BerriAI/litellm/issues/7275
* refactor(azure_ai/): move to base llm http handler
* fix(azure_ai/): handle differing api endpoints
* fix(azure_ai/): make sure all unit tests are passing
* fix: fix linting errors
* fix: fix linting errors
* fix: fix linting error
* fix: fix linting errors
* fix(azure_ai/transformation.py): handle extra body param
* fix(azure_ai/transformation.py): fix max retries param handling
* fix: fix test
* test(test_azure_o1.py): fix test
* fix(llm_http_handler.py): support handling azure ai unprocessable entity error
* fix(llm_http_handler.py): handle sync invalid param error for azure ai
* fix(azure_ai/): streaming support with base_llm_http_handler
* fix(llm_http_handler.py): working sync stream calls with unprocessable entity handling for azure ai
* fix: fix linting errors
* fix(llm_http_handler.py): fix linting error
* fix(azure_ai/): handle cohere tool call invalid index param error
* refactor(utils.py): migrate amazon titan config to base config
* refactor(utils.py): refactor bedrock meta invoke model translation to use base config
* refactor(utils.py): move bedrock ai21 to base config
* refactor(utils.py): move bedrock cohere to base config
* refactor(utils.py): move bedrock mistral to use base config
* refactor(utils.py): move all provider optional param translations to using a config
* docs(clientside_auth.md): clarify how to pass vertex region to litellm proxy
* fix(utils.py): handle scenario where custom llm provider is none / empty
* fix: fix get config
* test(test_otel_load_tests.py): widen perf margin
* fix(utils.py): fix get provider config check to handle custom llm's
* fix(utils.py): fix check
* build(model_prices_and_context_window.json): add gemini-1.5-flash context caching
* fix(context_caching/transformation.py): just use last identified cache point
Fixes https://github.com/BerriAI/litellm/issues/6738
* fix(context_caching/transformation.py): pick first contiguous block - handles system message error from google
Fixes https://github.com/BerriAI/litellm/issues/6738
* fix(vertex_ai/gemini/): track context caching tokens
* refactor(gemini/): place transformation.py inside `chat/` folder
makes it easy for users to know we support the equivalent endpoint
* fix: fix import
* refactor(vertex_ai/): move vertex_ai cost calc inside vertex_ai/ folder
makes it easier to see the cost calculation logic
* fix: fix linting errors
* fix: fix circular import
* feat(gemini/cost_calculator.py): support gemini context caching cost calculation
generalizes anthropic's cost calculation function and uses it across anthropic + gemini
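Illustrative shape of the generalized calculation (rates are placeholders; real per-model prices come from model_prices_and_context_window.json): cached prompt tokens are billed at the discounted cache-read rate.

```python
def generic_cost_per_token(usage: dict, prices: dict) -> float:
    cached = usage.get("cached_tokens", 0)
    prompt_cost = (
        (usage["prompt_tokens"] - cached) * prices["input_cost_per_token"]
        + cached * prices["cache_read_input_token_cost"]
    )
    completion_cost = usage["completion_tokens"] * prices["output_cost_per_token"]
    return prompt_cost + completion_cost

print(
    generic_cost_per_token(
        {"prompt_tokens": 10_000, "cached_tokens": 8_000, "completion_tokens": 500},
        {
            "input_cost_per_token": 0.075e-6,            # placeholder rates
            "cache_read_input_token_cost": 0.01875e-6,
            "output_cost_per_token": 0.30e-6,
        },
    )
)
```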
* build(model_prices_and_context_window.json): add cost tracking for gemini-1.5-flash-002 w/ context caching
Closes https://github.com/BerriAI/litellm/issues/6891
* docs(gemini.md): add gemini context caching architecture diagram
makes it easier for users to understand how context caching works
* docs(gemini.md): link to relevant gemini context caching code
* docs(gemini/context_caching): add readme on github, making it easy for devs to know context caching is supported + where to find the code
* fix(llm_cost_calc/utils.py): handle gemini 128k token diff cost calc scenario
* fix(deepseek/cost_calculator.py): support deepseek context caching cost calculation
* test: fix test
* fix(factory.py): skip empty text blocks for bedrock user messages
Fixes https://github.com/BerriAI/litellm/issues/7169
* Add support for Gemini 2.0 GoogleSearch tool (#7257)
* Add support for google_search tool in gemini 2.0
* Add/modify tests
* Fix grounding check
* Remove 2.0 grounding test; exclude experimental model in VERTEX_MODELS_TO_NOT_TEST
* Swap order of tools
* Fix formatting
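Caller-side sketch of the tool added in #7257 (model name is a placeholder): Gemini 2.0 exposes grounding via a `googleSearch` tool.

```python
import litellm

response = litellm.completion(
    model="gemini/gemini-2.0-flash-exp",  # placeholder model
    messages=[{"role": "user", "content": "Who won the most recent Super Bowl?"}],
    tools=[{"googleSearch": {}}],
)
print(response.choices[0].message.content)
```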
* fix(get_api_base.py): return api base in streaming response
Fixes https://github.com/BerriAI/litellm/issues/7249
Closes https://github.com/BerriAI/litellm/pull/7250
* fix(cost_calculator.py): only set base model to model if not none
Fixes https://github.com/BerriAI/litellm/issues/7223
* fix(cost_calculator.py): enforce stricter order when picking model for cost calculation
* fix(cost_calculator.py): fix '_select_model_name_for_cost_calc' to return model name with region name prefix if provided
* fix(utils.py): fix 'get_model_info()' to handle edge case where model name starts with custom llm provider AND custom llm provider is given
* fix(cost_calculator.py): handle `custom_llm_provider-` scenario
* fix(cost_calculator.py): e2e working tts cost tracking
ensures the initial message is passed in to the cost calculator
* fix(factory.py): suppress linting errors
* fix(cost_calculator.py): strip llm provider from model name after selecting cost calc model
* fix(litellm_logging.py): store initial request in 'input' field + accept base_model to be passed in litellm_params directly
* test: handle none env var value in flaky test
* fix(litellm_logging.py): fix linting errors
---------
Co-authored-by: Sam B <samlingx@gmail.com>