* fix(utils.py): handle failed hf tokenizer request during calls
prevents proxy from failing due to bad hf tokenizer calls
* fix(utils.py): convert failure callback str to custom logger class
Fixes https://github.com/BerriAI/litellm/issues/8013
* test(test_utils.py): fix test - avoid adding mlflow dep on ci/cd
* fix: add missing env vars to test
* test: cleanup redundant test
* feat(handle_jwt.py): initial commit adding custom RBAC support on jwt auth
allows the admin to define a user role field and allowed roles that map to 'internal_user' on litellm
* fix(auth_checks.py): ensure user allowed to access model, when calling via personal keys
Fixes https://github.com/BerriAI/litellm/issues/8029
* feat(handle_jwt.py): support role based access with model permission control on proxy
Allows an admin to simply grant users roles on their IDP (e.g. Azure AD/Keycloak) so those users can immediately start calling models; see the config sketch below
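A rough sketch of the proxy config this enables, written as a Python dict for illustration (the field names `user_roles_jwt_field` and `user_allowed_roles` are assumptions inferred from these commits, not a confirmed schema):

```python
# Hypothetical proxy config for RBAC-on-JWT auth, expressed as a Python dict.
# Field names are assumptions based on the commit messages above.
proxy_config = {
    "general_settings": {
        "enable_jwt_auth": True,
        "litellm_jwtauth": {
            # JWT claim on the IDP (Azure AD / Keycloak) that carries user roles
            "user_roles_jwt_field": "resource_access.litellm-client.roles",
            # IDP roles that LiteLLM should map to its 'internal_user' role
            "user_allowed_roles": ["basic_user"],
        },
    },
}
```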
* docs(rbac): add docs on rbac for model access control
makes it clear how an admin can use roles to control model access on the proxy
* fix: fix linting errors
* test(test_user_api_key_auth.py): add unit testing to ensure rbac role is correctly enforced
* test(test_user_api_key_auth.py): add more testing
* test(test_users.py): add unit testing to ensure user model access is always checked for new keys
Resolves https://github.com/BerriAI/litellm/issues/8029
* test: fix unit test
* fix(dot_notation_indexing.py): fix typing to work with python 3.8
* feat(main.py): use asyncio.sleep for mock_timeout=true on async request
adds unit testing to ensure the proxy does not fail if specific OpenAI requests hang (e.g. recent o1 outage); see the sketch below
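A minimal sketch of the behaviour under test, assuming `mock_timeout` is accepted as a request kwarg:

```python
import asyncio
import litellm

async def main():
    # mock_timeout=True (assumed kwarg) simulates a hung provider request;
    # sleeping via asyncio.sleep keeps the event loop responsive while waiting.
    try:
        await litellm.acompletion(
            model="gpt-4o",
            messages=[{"role": "user", "content": "hi"}],
            mock_timeout=True,
            timeout=1,
        )
    except litellm.Timeout:
        print("timed out without blocking the event loop")

asyncio.run(main())
```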
* fix(streaming_handler.py): fix deepseek r1 return reasoning content on streaming
Fixes https://github.com/BerriAI/litellm/issues/7942
* Revert "fix(streaming_handler.py): fix deepseek r1 return reasoning content on streaming"
This reverts commit 7a052a64e3.
* fix(deepseek-r1): return reasoning_content as a top-level param
ensures compatibility with existing tools that use it
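A minimal sketch of what this looks like from the SDK (assumes a DeepSeek key is configured):

```python
import litellm

# After this change, reasoning is a sibling of content on the message.
resp = litellm.completion(
    model="deepseek/deepseek-reasoner",
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
)
print(resp.choices[0].message.reasoning_content)  # the model's reasoning trace
print(resp.choices[0].message.content)            # the final answer
```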
* fix: fix linting error
* fix(utils.py): initial commit fixing custom cost tracking
refactors provider-specific model info out of `get_model_info`, which was causing custom costs to be registered incorrectly
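For context, custom costs are typically registered via `litellm.register_model`; a minimal sketch with made-up prices:

```python
import litellm

# Register made-up per-token prices for a custom model so cost tracking
# can price its usage; this fix keeps that path working correctly.
litellm.register_model({
    "my-custom-model": {
        "input_cost_per_token": 0.0000006,
        "output_cost_per_token": 0.0000012,
        "litellm_provider": "openai",
    }
})
```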
* fix(utils.py): cleanup `_supports_factory` to check provider info, if model info is None
some providers support features like vision across all models
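A simplified illustration of that fallback (not the actual litellm internals):

```python
# Simplified sketch: prefer per-model info, fall back to provider-wide info
# when the model has no entry of its own.
def supports_feature(feature_key, model_info, provider_info):
    if model_info is not None:
        return bool(model_info.get(feature_key, False))
    # e.g. a provider where every model supports vision
    return bool((provider_info or {}).get(feature_key, False))

print(supports_feature("supports_vision", None, {"supports_vision": True}))  # True
```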
* fix(utils.py): refactor to use _supports_factory
* test: update testing
* fix: fix linting errors
* test: fix testing
* fix(base_utils.py): support nested json schema passed in for anthropic calls
* refactor(base_utils.py): refactor ref parsing to prevent infinite loop
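A sketch of the kind of call this covers: a nested Pydantic model emits `$ref` entries in its JSON schema, which the ref parsing above must resolve without looping:

```python
from typing import List

import litellm
from pydantic import BaseModel

class Step(BaseModel):
    explanation: str

class Answer(BaseModel):
    steps: List[Step]  # nested model -> $ref in the generated JSON schema
    final: str

# Assumes an Anthropic key is configured.
resp = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "Solve 2x + 3 = 7"}],
    response_format=Answer,
)
```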
* test(test_openai_endpoints.py): refactor anthropic test to use bedrock
* fix(langfuse_prompt_management.py): add unit test for sync langfuse calls
Resolves https://github.com/BerriAI/litellm/issues/7938#issuecomment-2613293757
* _add_guardrails_from_key_or_team_metadata
* e2e test test_guardrails_with_team_controls
* add try/except on team new
* test_guardrails_with_team_controls
* test_guardrails_with_api_key_controls
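A rough sketch of the flow these tests exercise (payload shape assumed from the commit titles; assumes a local proxy with master key `sk-1234`):

```python
import httpx

BASE, KEY = "http://localhost:4000", "sk-1234"  # assumed local proxy + master key

# Create a team whose metadata pins a guardrail; keys issued under the team
# (or keys carrying the same metadata) then run requests through it.
team = httpx.post(
    f"{BASE}/team/new",
    headers={"Authorization": f"Bearer {KEY}"},
    json={"metadata": {"guardrails": ["bedrock-pre-guard"]}},
).json()
print(team)
```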
* test(test_completion_cost.py): add sdk test to ensure base model is used for cost tracking
* test(test_completion_cost.py): add sdk test to ensure custom pricing works
* fix(main.py): add base model cost tracking support for embedding calls
Enables base model cost tracking for embedding calls when the base model is set as a litellm_param; see the sketch below
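A minimal SDK-level sketch, assuming `base_model` is accepted as a kwarg and forwarded as a litellm_param:

```python
import litellm

# The Azure deployment name alone isn't enough to price the call;
# base_model tells cost tracking which pricing entry to use.
resp = litellm.embedding(
    model="azure/my-embedding-deployment",  # hypothetical deployment name
    input=["hello world"],
    base_model="text-embedding-ada-002",    # priced as this model
)
print(resp._hidden_params.get("response_cost"))
```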
* fix(litellm_logging.py): update logging object with litellm params - including base model, if given
ensures base model param is always tracked
* fix(main.py): fix linting errors
* fix(http_handler.py): support passing ssl verify dynamically and using the correct httpx client based on passed ssl verify param
Fixes https://github.com/BerriAI/litellm/issues/6499
* feat(llm_http_handler.py): support passing `ssl_verify=False` dynamically in call args
Closes https://github.com/BerriAI/litellm/issues/6499
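A minimal sketch of the per-call usage (as opposed to the global `litellm.ssl_verify` setting):

```python
import litellm

# Useful behind corporate proxies with self-signed certificates:
# disable certificate verification for this request only.
resp = litellm.completion(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "hi"}],
    ssl_verify=False,
)
```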
* fix(proxy/utils.py): prevent bad logs from breaking all cost tracking + reset list regardless of success/failure
prevents malformed logs from breaking all spend tracking, since failed writes are constantly retried
* test(test_proxy_utils.py): add test to ensure bad log is dropped
* test(test_proxy_utils.py): ensure in-memory spend logs reset after bad log error
* test(test_user_api_key_auth.py): add unit test to ensure end user id as str works
* fix(auth_utils.py): ensure extracted end user id is always a str
prevents db cost tracking errors
* test(test_auth_utils.py): ensure get end user id from request body always returns a string
* test: update tests
* test: skip bedrock test - behaviour now supported
* test: fix testing
* refactor(spend_tracking_utils.py): reduce size of get_logging_payload
* test: fix test
* bump: version 1.59.4 → 1.59.5
* Revert "bump: version 1.59.4 → 1.59.5"
This reverts commit 1182b46b2e.
* fix(utils.py): fix spend logs retry logic
* fix(spend_tracking_utils.py): fix get tags
* fix(spend_tracking_utils.py): fix end user id spend tracking on pass-through endpoints
* fix(bedrock/converse_handler.py): fix bedrock region name on async calls
* fix(utils.py): fix split model handling
Fixes bedrock cost calculation when region name is given
* feat(_health_endpoints.py): support health checking datadog integration
Closes https://github.com/BerriAI/litellm/issues/7921
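A sketch of probing the integration, assuming the existing `/health/services` route now accepts `datadog` (local proxy, master key `sk-1234`):

```python
import httpx

# Assumes a local proxy with datadog callbacks configured.
resp = httpx.get(
    "http://localhost:4000/health/services",
    params={"service": "datadog"},
    headers={"Authorization": "Bearer sk-1234"},
)
print(resp.status_code, resp.json())
```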
* feat(router.py): add retry headers to response
makes it easy to add testing to ensure model-specific retries are respected
* fix(add_retry_headers.py): clarify attempted retries vs. max retries
* test(test_fallbacks.py): add test for checking if max retries set for model is respected
* test(test_fallbacks.py): assert values for attempted retries and max retries are as expected
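A sketch of reading the new headers from a raw proxy response (the header names are assumptions inferred from these commits):

```python
import openai

client = openai.OpenAI(base_url="http://localhost:4000", api_key="sk-1234")
raw = client.chat.completions.with_raw_response.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hi"}],
)
# Hypothetical header names inferred from the commit messages above.
print(raw.headers.get("x-litellm-attempted-retries"))
print(raw.headers.get("x-litellm-max-retries"))
```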
* fix(utils.py): return timeout in litellm proxy response headers
* test(test_fallbacks.py): add test to assert model specific timeout used on timeout error
* test: add bad model with timeout to proxy
* fix: fix linting error
* fix(router.py): fix get model list from model alias
* test: loosen test restriction - account for other events on proxy
* feat(main.py): add new 'provider_specific_header' param
allows passing an extra header for a specific provider; see the sketch below
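A sketch of the new param (its exact shape is an assumption based on the commit message):

```python
import litellm

# Attach an extra header only when the request is routed to anthropic;
# other providers in a fallback chain won't receive it.
resp = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "hi"}],
    provider_specific_header={
        "custom_llm_provider": "anthropic",
        "extra_headers": {"anthropic-beta": "prompt-caching-2024-07-31"},
    },
)
```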
* fix(litellm_pre_call_utils.py): add unit test for pre call utils
* test(test_bedrock_completion.py): skip test now that bedrock supports this
* fix(utils.py): move adding custom logger callback to success event into separate function + don't add success callback to failure event
if the user explicitly chooses a 'success' callback, don't log failures as well
* test(test_utils.py): add unit test to ensure custom logger callback only adds callback to specific event
* fix(utils.py): remove string from list of callbacks once corresponding callback class is added
prevents stray string entries from lingering in the callback list - simplifies testing
* fix(utils.py): fix linting error
* test: cleanup args before test
* test: fix test
* test: update test
* test: fix test
* docs(docker_quick_start.md): add more troubleshooting guides
* test(test_fallbacks.py): add e2e test for proxy with fallbacks + custom fallback message
* test(test_bedrock_completion.py): skip test now that bedrock supports this behaviour
* test(test_fireworks_ai_translation.py): mock fireworks ai test
* fix(types/utils.py): support returning 'reasoning_content' for deepseek models
Fixes https://github.com/BerriAI/litellm/issues/7877#issuecomment-2603813218
* fix(convert_dict_to_response.py): return deepseek response in provider_specific_field
allows for separating openai vs. non-openai params in model response
* fix(utils.py): support 'provider_specific_field' in delta chunk as well
allows deepseek reasoning content chunks to be returned to the user from the stream as well
Fixes https://github.com/BerriAI/litellm/issues/7877#issuecomment-2603813218
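A streaming sketch of the delta behaviour (assumes a DeepSeek key; reasoning deltas arrive before the answer deltas):

```python
import litellm

for chunk in litellm.completion(
    model="deepseek/deepseek-reasoner",
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
    stream=True,
):
    delta = chunk.choices[0].delta
    if getattr(delta, "reasoning_content", None):
        print(delta.reasoning_content, end="")  # reasoning trace
    elif delta.content:
        print(delta.content, end="")            # final answer
```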
* fix(watsonx/chat/handler.py): fix passing space id to watsonx on chat route
* fix(watsonx/): fix watsonx_text/ route with space id
* fix(watsonx/): qa item - also adds better unit testing for watsonx embedding calls
* fix(utils.py): rename to '..fields'
* fix: fix linting errors
* fix(utils.py): fix typing - don't show provider-specific field if none or empty - prevents default response from being non-OpenAI-compatible
* fix: cleanup unused imports
* docs(deepseek.md): add docs for deepseek reasoning model
* fix(utils.py): don't pass 'anthropic-beta' header to vertex - will cause request to fail
* fix(utils.py): add flag to allow user to disable filtering invalid headers
ensures the user can control this behaviour
* style(utils.py): cleanup message
* test(test_utils.py): add unit test to cover invalid header filtering
* fix(proxy_server.py): fix custom openapi schema generation
* fix(utils.py): pass extra headers if set
* fix(main.py): fix image variation to use 'client' param