* feat(main.py): add new 'provider_specific_header' param
allows passing an extra header for a specific provider
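A hedged usage sketch (the exact dict shape is an assumption based on this change):

```python
import litellm

# pass a header that should only be attached for one provider
response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=[{"role": "user", "content": "hello"}],
    provider_specific_header={
        "custom_llm_provider": "anthropic",          # assumed key name
        "extra_headers": {"anthropic-beta": "..."},  # only sent on anthropic requests
    },
)
```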
* fix(litellm_pre_call_utils.py): add unit test for pre call utils
* test(test_bedrock_completion.py): skip test now that bedrock supports this
* fix(utils.py): move adding custom logger callback to success event into separate function + don't add success callback to failure event
if the user explicitly chooses a 'success' callback, don't log failure events as well
* test(test_utils.py): add unit test to ensure custom logger callback only adds callback to specific event
* fix(utils.py): remove string from list of callbacks once corresponding callback class is added
prevents leftover string values in the callback list - simplifies testing
* fix(utils.py): fix linting error
* test: cleanup args before test
* test: fix test
* test: update test
* test: fix test
* fix(types/utils.py): support returning 'reasoning_content' for deepseek models
Fixes https://github.com/BerriAI/litellm/issues/7877#issuecomment-2603813218
* fix(convert_dict_to_response.py): return deepseek response in provider_specific_field
allows for separating openai vs. non-openai params in model response
* fix(utils.py): support 'provider_specific_field' in delta chunk as well
allows deepseek reasoning content chunk to be returned to user from stream as well
Fixes https://github.com/BerriAI/litellm/issues/7877#issuecomment-2603813218
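A hedged usage sketch of the reasoning content surfaced here (field access assumes the provider-specific fields naming from this PR):

```python
import litellm

# non-streaming: reasoning content returned as a provider-specific field
resp = litellm.completion(
    model="deepseek/deepseek-reasoner",
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
)
print(resp.choices[0].message.provider_specific_fields)  # may include 'reasoning_content'

# streaming: delta chunks can also carry provider-specific fields
for chunk in litellm.completion(
    model="deepseek/deepseek-reasoner",
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
    stream=True,
):
    fields = getattr(chunk.choices[0].delta, "provider_specific_fields", None)
    if fields:
        print(fields.get("reasoning_content"))
```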
* fix(watsonx/chat/handler.py): fix passing space id to watsonx on chat route
* fix(watsonx/): fix watsonx_text/ route with space id
* fix(watsonx/): qa item - also adds better unit testing for watsonx embedding calls
* fix(utils.py): rename to '..fields'
* fix: fix linting errors
* fix(utils.py): fix typing - don't show provider-specific field if none or empty - prevents default response from being non-oai compatible
* fix: cleanup unused imports
* docs(deepseek.md): add docs for deepseek reasoning model
* fix(utils.py): don't pass 'anthropic-beta' header to vertex - will cause request to fail
* fix(utils.py): add flag to allow user to disable filtering invalid headers
ensure user can control behaviour
* style(utils.py): cleanup message
* test(test_utils.py): add unit test to cover invalid header filtering
* fix(proxy_server.py): fix custom openapi schema generation
* fix(utils.py): pass extra headers if set
* fix(main.py): fix image variation to use 'client' param
* fix(proxy_server.py): fix get model info when litellm_model_id is set
Fixes https://github.com/BerriAI/litellm/issues/7873
* test(test_models.py): add test to ensure get model info on specific deployment has same value as all model info
Fixes https://github.com/BerriAI/litellm/issues/7873
* fix(usage.tsx): make model analytics free
Fixes @iqballx's feedback
* fix(invoke_handler.py): fix bedrock error chunk parsing - return correct bedrock status code and error message if error chunk in stream
Improves bedrock stream error handling
* fix(proxy_server.py): fix linting errors
* test(test_auth_checks.py): remove redundant test
* fix(proxy_server.py): fix linting errors
* test: fix flaky test
* test: fix test
* fix(router.py): pass stream timeout correctly for non openai / azure models
Fixes https://github.com/BerriAI/litellm/issues/7870
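A hedged router config sketch showing where the stream timeout applies (model names are illustrative):

```python
from litellm import Router

# stream_timeout in litellm_params should now be honored for non openai / azure models
router = Router(
    model_list=[
        {
            "model_name": "bedrock-claude",
            "litellm_params": {
                "model": "bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
                "timeout": 60,          # overall request timeout
                "stream_timeout": 10,   # timeout used for streaming calls
            },
        }
    ]
)
```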
* test(test_router_timeout.py): add test for streaming
* test(test_router_timeout.py): add unit testing for new router functions
* docs(ollama.md): link to section on calling ollama within docker container
* test: remove redundant test
* test: fix test to include timeout value
* docs(config_settings.md): document new router settings param
* fix: initial test to return api timeout value in openai timeout exception - makes it easier for user to debug why request timed out
* feat(openai.py): return timeout value + time taken on openai timeout errors
helps debug timeout errors
* fix(utils.py): fix num retries extraction logic when num_retries = 0
* fix(config_settings.md, litellm_logging.py): support printing payload to console if 'LITELLM_PRINT_STANDARD_LOGGING_PAYLOAD' is true
enables easier debugging
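Minimal sketch of enabling this, assuming the env var is read at request time:

```python
import os

# set before making requests so the standard logging payload is printed to console
os.environ["LITELLM_PRINT_STANDARD_LOGGING_PAYLOAD"] = "True"

import litellm

litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    mock_response="hello",  # mock response so this runs without an api key
)
```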
* test(test_auth_checks.py): remove common checks UserAPIKeyAuth enforcement check
* fix(litellm_logging.py): fix linting error
* fix(user_api_key_auth.py): handle clientside fallback model when item in list is dictionary
* fix(auth_checks.py): help user find invalid model names during dev
Ensure fallbacks work in prod
* fix(user_api_key_auth.py): fix linting check
* fix: cleanup unused variables
* fix: fix import
* fix(auth_checks.py): fix auth check
* feat(health_check.py): set upperbound for api when making health check call
prevents a bad model in the health check from hanging and causing pod restarts
* fix(health_check.py): cleanup task once completed
* fix(constants.py): bump default health check timeout to 1min
* docs(health.md): add 'health_check_timeout' to health docs on litellm
* build(proxy_server_config.yaml): add bad model to health check
* fix(lm_studio/chat/transformation.py): Fix https://github.com/BerriAI/litellm/issues/7811
* fix(router.py): fix mock timeout check
* fix: drop model name from fallback args since it causes a conflict with the model=model that is provided later on. (#7806)
This error happens if you provide multiple fallback models to the completion function with model name defined in each one.
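A hedged sketch of the call shape that previously triggered the conflict (model names are illustrative):

```python
import litellm

# each fallback dict carries its own "model" key; the fix drops the model name from
# the fallback args so it no longer clashes with the model=model passed later on
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
    fallbacks=[
        {"model": "azure/gpt-35-turbo", "api_base": "https://example-endpoint.openai.azure.com"},
        {"model": "claude-3-haiku-20240307"},
    ],
)
```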
* fix(router.py): remove mock_timeout before sending to request
prevents reuse in fallbacks
* test: update test
* test: revert test change - wrong pr
---------
Co-authored-by: Dudu Lasry <david1542@users.noreply.github.com>
* refactor: initial commit for using separate sync vs. async transformation routes for bedrock
ensures no blocking calls e.g. when converting image url to b64
* perf(converse_transformation.py): make bedrock converse transformation async
asyncify's the bedrock message transformation - useful for handling image urls for bedrock
* fix(converse_handler.py): fix logging for async streaming
* style: cleanup unused imports
* build: ensure all regional bedrock models have same supported values as base bedrock model
prevents drift
* test(base_llm_unit_tests.py): add testing for nested pydantic objects
* fix(test_utils.py): add test_get_potential_model_names
* fix(anthropic/chat/transformation.py): support nested pydantic objects
Fixes https://github.com/BerriAI/litellm/issues/7755
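A hedged sketch of the nested pydantic case now covered (model name is illustrative):

```python
from pydantic import BaseModel
import litellm

class Address(BaseModel):
    city: str
    country: str

class Person(BaseModel):
    name: str
    address: Address  # nested pydantic object

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=[{"role": "user", "content": "Extract: John lives in Paris, France."}],
    response_format=Person,
)
```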
* test: initial test to enforce all functions in user_api_key_auth.py have direct testing
* test(test_user_api_key_auth.py): add is_allowed_route unit test
* test(test_user_api_key_auth.py): add more tests
* test(test_user_api_key_auth.py): add complete testing coverage for all functions in `user_api_key_auth.py`
* test(test_db_schema_changes.py): add a unit test to ensure all db schema changes are backwards compatible
gives user an easy rollback path
* test: fix schema compatibility test filepath
* test: fix test
* feat(pass_through_endpoints.py): fix anthropic end user cost tracking
* fix(anthropic/chat/transformation.py): use returned provider model for anthropic
handles anthropic `-latest` tag in request body throwing cost calculation errors
ensures we can be accurate in our model cost tracking
* feat(model_prices_and_context_window.json): add gemini-2.0-flash-thinking-exp pricing
* test: update test to use assumption that user_api_key_dict can get anthropic user id
* test: fix test
* fix: fix test
* fix(anthropic_pass_through.py): uncomment previous anthropic end-user cost tracking code block
can't guarantee user api key dict always has end user id - too many code paths
* fix(user_api_key_auth.py): allow end user id from request body to always be read and set in auth object
* fix(auth_check.py): fix linting error
* test: fix auth check
* fix(auth_utils.py): fix get end user id to handle metadata = None
* feat(main.py): initial commit for `/image/variations` endpoint support
* refactor(base_llm/): introduce new base llm base config for image variation endpoints
* refactor(openai/image_variations/transformation.py): implement openai image variation transformation handler
* fix: test
* feat(openai/): working openai `/image/variation` endpoint calls via sdk
* feat(topaz/): topaz sync image variation call support
Addresses https://github.com/BerriAI/litellm/issues/7593
* fix(topaz/transformation.py): fix linting errors
* fix(openai/image_variations/handler.py): fix passing json data
* fix(main.py): support async image variation route - `aimage_variation`
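A hedged sketch of the new route (the sync call shape is an assumption mirroring openai's image variation params; `aimage_variation` is the async route named above):

```python
import litellm

# sync sketch - assumed parameters
response = litellm.image_variation(
    model="openai/dall-e-2",
    image=open("input.png", "rb"),
    n=1,
    size="512x512",
)

# async variant added in this change
# response = await litellm.aimage_variation(model="openai/dall-e-2", image=open("input.png", "rb"))
```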
* fix(test_get_model_info.py): fix test
* fix: cleanup unused imports
* feat(openai/): add async `/image/variations` endpoint support
* feat(topaz/): support async `/image/variations` calls
* fix: test
* fix(utils.py): fix get_model_info_helper for no model info w/ provider config
handles situation where model info is not known but provider config exists
* test(test_router_fallbacks.py): mark flaky test
* fix: fix unused imports
* test: bump otel load test perf threshold - accounts for current load tests hitting same server
* fix(__init__.py): fix init to exclude pricing-only model cost values from real model names
prevents bad health checks on wildcard routes
* fix(get_llm_provider.py): fix to handle calling bedrock_converse models
* feat(langfuse.py): log the used prompt when prompt management used
* test: fix test
* docs(self_serve.md): add doc on restricting personal key creation on ui
* feat(s3.py): support s3 logging with team alias prefixes (if available)
New preview feature
* fix(main.py): remove old if block - simplify to just await if coroutine returned
fixes lm_studio async embedding error
* fix(langfuse.py): handle get prompt check
* test(test_basic_python_version.py): assert all optional dependencies are marked as extras on poetry
Fixes https://github.com/BerriAI/litellm/issues/7677
* docs(secret.md): clarify 'read_and_write' secret manager usage on aws
* docs(secret.md): fix doc
* build(ui/teams.tsx): add edit/delete button for updating user / team membership on ui
allows updating user role to admin on ui
* build(ui/teams.tsx): display edit member component on ui, when edit button on member clicked
* feat(team_endpoints.py): support updating team member role to admin via api endpoints
allows team member to become admin post-add
* build(ui/user_dashboard.tsx): if team admin - show all team keys
Fixes https://github.com/BerriAI/litellm/issues/7650
* test(config.yml): add tomli to ci/cd
* test: don't call python_basic_testing in local testing (covered by python 3.13 testing)
* test(test_get_model_info.py): add unit test confirming router deployment updates global 'get_model_info'
* fix(get_supported_openai_params.py): fix custom llm provider 'get_supported_openai_params'
Fixes https://github.com/BerriAI/litellm/issues/7668
* docs(azure.md): clarify how azure ad token refresh on proxy works
Closes https://github.com/BerriAI/litellm/issues/7665
* fix(vertex_ai/gemini/transformation.py): handle 'http://' in gemini process url
* refactor(router.py): refactor '_prompt_management_factory' to use logging obj get_chat_completion logic
deduplicates code
* fix(litellm_logging.py): update 'get_chat_completion_prompt' to update logging object messages
* docs(prompt_management.md): update prompt management to be in beta
per feedback, this still needs revision (e.g. passing in the user message, not ignoring it)
* refactor(prompt_management_base.py): introduce base class for prompt management
allows consistent behaviour across prompt management integrations
* feat(prompt_management_base.py): support adding client message to template message + refactor langfuse prompt management to use prompt management base
* fix(litellm_logging.py): log prompt id + prompt variables to langfuse if set
allows tracking what prompt was used for what purpose
* feat(litellm_logging.py): log prompt management metadata in standard logging payload + use in langfuse
allows logging prompt id / prompt variables to langfuse
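A hedged sketch of how the prompt id / variables flow through (assumes a langfuse prompt management setup; parameter names follow the commits above):

```python
import litellm

# assumes langfuse credentials are configured and the integration is enabled as a callback
litellm.callbacks = ["langfuse"]

response = litellm.completion(
    model="gpt-3.5-turbo",
    prompt_id="my-langfuse-prompt",          # illustrative prompt id, logged to langfuse
    prompt_variables={"topic": "litellm"},   # logged in the standard logging payload as well
    messages=[{"role": "user", "content": "write a one-line summary"}],
)
```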
* test: fix test
* fix(router.py): cleanup unused imports
* fix: fix linting error
* fix: fix trace param typing
* fix: fix linting errors
* fix: fix code qa check
* fix(main.py): fix lm_studio/ embedding routing
adds the mapping + updates docs with example
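A hedged sketch of the lm_studio/ embedding routing (model name and local endpoint are assumptions):

```python
import litellm

response = litellm.embedding(
    model="lm_studio/nomic-embed-text",   # illustrative model name
    input=["hello world"],
    api_base="http://localhost:1234/v1",  # assumed local LM Studio server address
)
```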
* docs(self_serve.md): update doc to show how to auto-add sso users to teams
* fix(streaming_handler.py): simplify async iterator check to just check if the streaming response is an async iterable
* fix(streaming_chunk_builder_utils.py): add test for groq tool calling + streaming + combine chunks
Addresses https://github.com/BerriAI/litellm/issues/7621
* fix(streaming_utils.py): fix modelresponseiterator for openai like chunk parser
ensures chunk parser uses the correct tool call id when translating the chunk
Fixes https://github.com/BerriAI/litellm/issues/7621
* build(model_hub.tsx): display cost pricing on model hub
* build(model_hub.tsx): show cost per token pricing + complete model information
* fix(types/utils.py): fix usage object handling
* feat(cost_calculator.py): add cost tracking ($0) for openai moderations endpoint
removes sentry cost tracking errors caused by this
* build(teams.tsx): allow assigning teams to orgs
* fix(custom_logger.py): expose new 'async_get_chat_completion_prompt' event hook
* fix(custom_logger.py, langfuse_prompt_management.py): remove 'headers' from custom logger 'async_get_chat_completion_prompt' and 'get_chat_completion_prompt' event hooks
* feat(router.py): expose new function for prompt management based routing
* feat(router.py): partial working router prompt factory logic
allows load balanced model to be used for model name w/ langfuse prompt management call
* feat(router.py): fix prompt management with load balanced model group
* feat(langfuse_prompt_management.py): support reading in openai params from langfuse
enables user to define optional params on langfuse vs. client code
* test(test_Router.py): add unit test for router based langfuse prompt management
* fix: fix linting errors