* fix(key_management_endpoints.py): initial commit with logic to get all keys for teams the user is an admin for
* fix(key_management_endpoints.py): return all keys for teams the user is an admin for
* fix(key_management_endpoints.py): add query param to ensure user opts into seeing all team keys (not just their own)
* fix(regenerate_key_modal.tsx): fix key regeneration
* fix(proxy_server.py): fix model metrics check when the api base is None
* test(test_key_generate_prisma.py): remove redundant test
* test(test_proxy_utils.py): add unit test covering new management endpoint helper util
* fix: fix test
* test(test_proxy_server.py): fix test
* fix(model_checks.py): update returning known model from wildcard to filter based on given model prefix
ensures a wildcard route like `vertex_ai/gemini-*` returns only the known `vertex_ai/gemini-` models
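A minimal sketch of that prefix filter (helper name from the test below; the real signature in model_checks.py may differ):
```python
from typing import List

def get_known_models_from_wildcard(wildcard_model: str, known_models: List[str]) -> List[str]:
    # "vertex_ai/gemini-*" -> keep only models starting with "vertex_ai/gemini-"
    prefix = wildcard_model.rstrip("*")
    return [m for m in known_models if m.startswith(prefix)]

# -> ["vertex_ai/gemini-1.5-pro"]; other providers are filtered out
print(get_known_models_from_wildcard(
    "vertex_ai/gemini-*",
    ["vertex_ai/gemini-1.5-pro", "vertex_ai/claude-3", "openai/gpt-4o"],
))
```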
* test(test_proxy_utils.py): add unit testing for new 'get_known_models_from_wildcard' helper
* test(test_models.py): add e2e testing for `/model_group/info` endpoint
* feat(prometheus.py): support tracking total requests by user_email on prometheus
adds initial support for tracking total requests by user_email
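A minimal sketch of the metric shape, assuming `prometheus_client`; the actual metric and label names in prometheus.py may differ:
```python
from prometheus_client import Counter

litellm_proxy_total_requests = Counter(
    "litellm_proxy_total_requests",
    "Total requests made to the proxy, by user email",
    labelnames=["user_email"],
)

def track_request(user_email: str) -> None:
    # fall back to a sentinel so the label is always populated
    litellm_proxy_total_requests.labels(user_email=user_email or "unknown").inc()
```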
* test(test_prometheus.py): add testing to ensure user email is always tracked
* test: update testing for new prometheus metric
* test(test_prometheus_unit_tests.py): add user email to total proxy metric
* test: update tests
* test: fix spend tests
* test: fix test
* fix(pagerduty.py): fix linting error
* update team info endpoint
* clean up model alias
* fix model alias
* fix model alias card
* clean up naming on docs
* fix model alias card
* fix _model_in_team_aliases
* team alias - fix litellm.model_alias_map
* fix _update_model_if_team_alias_exists
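Illustratively, the alias resolution amounts to the sketch below (assuming team aliases are stored as a plain dict; the actual lookup differs):
```python
def _update_model_if_team_alias_exists(model: str, team_model_aliases: dict) -> str:
    # a team calling its alias (e.g. "gpt-4o-team-1") gets the underlying model
    return team_model_aliases.get(model, model)
```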
* fix test_aview_spend_per_user
* test model alias functionality with teams
* complete e2e test
* test_update_model_if_team_alias_exists
* fix key_model_access_denied
* test_can_key_call_model_with_aliases
* fix test_aview_spend_per_user
* feat(handle_jwt.py): initial commit to allow scope based model access
* feat(handle_jwt.py): allow model access based on token scopes
allow admin to control model access from IDP
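A rough sketch of the scope check, assuming a space-delimited `scope` claim and an admin-configured scope-to-model mapping (both assumptions; the real logic lives in handle_jwt.py):
```python
from typing import Dict, List

def can_call_model_from_scopes(
    requested_model: str, scope_claim: str, scope_model_map: Dict[str, List[str]]
) -> bool:
    token_scopes = scope_claim.split(" ")
    # e.g. scope_model_map = {"litellm.api.gpt-4o": ["gpt-4o"]}
    allowed = [m for s in token_scopes for m in scope_model_map.get(s, [])]
    return requested_model in allowed
```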
* test(test_jwt.py): add unit testing for scope based model access
* docs(token_auth.md): add scope based model access to docs
* docs(token_auth.md): update docs
* docs(token_auth.md): update docs
* build: add gemini commercial rate limits
* fix: fix linting error
* feat(proxy/_types.py): add new jwt field params
allows users + services to auth into proxy
* feat(handle_jwt.py): allow team role proxy access
allows proxy admin to set allowed team roles
* fix(proxy/_types.py): add 'routes' to role based permissions
allow proxy admin to restrict what routes a team can access easily
* feat(handle_jwt.py): support more flexible role based route access
v2 on role based 'allowed_routes'
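Conceptually, the v2 'allowed_routes' check is a sketch like this (role names and wildcard matching are assumptions):
```python
from fnmatch import fnmatch

ROLE_ALLOWED_ROUTES = {
    "team": ["/chat/completions", "/embeddings"],
    "proxy_admin": ["*"],  # admins can hit every route
}

def is_route_allowed(role: str, route: str) -> bool:
    return any(fnmatch(route, pattern) for pattern in ROLE_ALLOWED_ROUTES.get(role, []))
```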
* test(test_jwt.py): add unit test for rbac for proxy routes
* feat(handle_jwt.py): ensure cost tracking always works for any jwt request with `enforce_rbac=True`
* docs(token_auth.md): add documentation on controlling model access via OIDC Roles
* test: increase time delay before retrying
* test: handle model overloaded for test
* fix(key_management_endpoints.py): fix vulnerability where a user could update another user's keys
Resolves https://github.com/BerriAI/litellm/issues/8031
* test(key_management_endpoints.py): return consistent 403 forbidden error when modifying key that doesn't belong to user
* fix(internal_user_endpoints.py): return model max budget in internal user create response
Fixes https://github.com/BerriAI/litellm/issues/7047
* test: fix test
* test: update test to handle gemini token counter change
* fix(factory.py): fix bedrock http:// handling
* docs: fix typo in lm_studio.md (#8222)
* test: fix testing
* test: fix test
---------
Co-authored-by: foreign-sub <51928805+foreign-sub@users.noreply.github.com>
* test: add more unit testing for team member add
* fix(team_endpoints.py): add validation check to prevent same user from being added to team again
prevents duplicates
* fix(team_endpoints.py): raise error if `/team/member_delete` called on member that's not in team
prevents calling delete on the same member multiple times
* test: update initial tests
* test: fix test
* test: update test to handle no member duplication
* build(schema.prisma): add new `sso_user_id` to LiteLLM_UserTable
easier way to store the sso id for an existing user
Allows an existing user who was added to a team to log in via SSO
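A rough sketch of the fallback lookup this enables, assuming a prisma-client-py style client (table and method names are illustrative):
```python
async def get_user_object_fuzzy(db, user_id: str):
    # try the primary key first
    user = await db.litellm_usertable.find_unique(where={"user_id": user_id})
    if user is None:
        # fall back to the new sso_user_id column, so pre-existing users match
        user = await db.litellm_usertable.find_first(where={"sso_user_id": user_id})
    return user
```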
* test(test_auth_checks.py): add unit testing for fuzzy user object get
* fix(handle_jwt.py): fix merge conflicts
* docs(token_auth.md): clarify title
* refactor(handle_jwt.py): add jwt auth manager + refactor to handle groups
allows user to call model if user belongs to group with model access
* refactor(handle_jwt.py): refactor to first check if service call then check user call
* feat(handle_jwt.py): new `enforce_team_access` param
only allows user to call model if a team they belong to has model access
allows controlling user model access by team
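Conceptually the team-gated check reduces to the following (assuming each team object exposes a `models` list):
```python
from typing import List

def can_user_call_model_via_teams(model: str, user_teams: List[dict]) -> bool:
    # the user may call the model only if some team they belong to lists it
    return any(model in team.get("models", []) for team in user_teams)
```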
* fix(handle_jwt.py): fix error string, remove unnecessary param
* docs(token_auth.md): add controlling model access for jwt tokens via teams to docs
* test: fix tests post refactor
* fix: fix linting errors
* fix: fix linting error
* test: fix import error
* refactor _add_callbacks_from_db_config
* fix check for _custom_logger_exists_in_litellm_callbacks
* move loc of test utils
* run ci/cd again
* test_add_custom_logger_callback_to_specific_event_with_duplicates_callbacks
* fix _custom_logger_class_exists_in_success_callbacks
* unit testing for test_add_callbacks_from_db_config
* test_custom_logger_exists_in_callbacks_individual_functions
* fix config.yml
* fix test test_stream_chunk_builder_openai_audio_output_usage - use direct dict comparison
* docs(reliability.md): add doc on disabling fallbacks per request
* feat(litellm_pre_call_utils.py): support reading request timeout from request headers - new `x-litellm-timeout` param
Allows setting dynamic model timeouts from Vercel's AI SDK
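Client-side usage sketch (header name from the commit above; base URL and key are placeholders):
```python
import openai

client = openai.OpenAI(base_url="http://localhost:4000", api_key="sk-...")
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hi"}],
    # per-request timeout in seconds, read by the proxy
    extra_headers={"x-litellm-timeout": "5"},
)
```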
* test(test_proxy_server.py): add simple unit test for reading request timeout
* test(test_fallbacks.py): add e2e test to confirm timeout passed in request headers is correctly read
* feat(main.py): support passing metadata to openai in preview
Resolves https://github.com/BerriAI/litellm/issues/6022#issuecomment-2616119371
* fix(main.py): fix passing openai metadata
* docs(request_headers.md): document new request headers
* build: Merge branch 'main' into litellm_dev_01_27_2025_p3
* test: loosen test
* feat(handle_jwt.py): initial commit adding custom RBAC support on jwt auth
allows admin to define user role field and allowed roles which map to 'internal_user' on litellm
* fix(auth_checks.py): ensure user allowed to access model, when calling via personal keys
Fixes https://github.com/BerriAI/litellm/issues/8029
* feat(handle_jwt.py): support role based access with model permission control on proxy
Allows the admin to grant users roles on the IDP (e.g. Azure AD/Keycloak) so they can immediately start calling models
* docs(rbac): add docs on rbac for model access control
make it clear how admin can use roles to control model access on proxy
* fix: fix linting errors
* test(test_user_api_key_auth.py): add unit testing to ensure rbac role is correctly enforced
* test(test_user_api_key_auth.py): add more testing
* test(test_users.py): add unit testing to ensure user model access is always checked for new keys
Resolves https://github.com/BerriAI/litellm/issues/8029
* test: fix unit test
* fix(dot_notation_indexing.py): fix typing to work with python 3.8
* fix(http_handler.py): support passing ssl verify dynamically and using the correct httpx client based on passed ssl verify param
Fixes https://github.com/BerriAI/litellm/issues/6499
* feat(llm_http_handler.py): support passing `ssl_verify=False` dynamically in call args
Closes https://github.com/BerriAI/litellm/issues/6499
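Usage sketch of the dynamic param described above:
```python
import litellm

resp = litellm.completion(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "hi"}],
    ssl_verify=False,  # skip TLS verification for this call only
)
```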
* fix(proxy/utils.py): prevent bad logs from breaking all cost tracking + reset list regardless of success/failure
prevents malformed logs from breaking all spend tracking, since they would otherwise be retried indefinitely
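The pattern, roughly (function and db method names are illustrative, not the actual proxy/utils.py code):
```python
async def flush_spend_logs(db, buffer: list) -> None:
    logs_to_write = list(buffer)
    buffer.clear()  # reset the in-memory list regardless of what happens below
    for log in logs_to_write:
        try:
            await db.insert_spend_log(log)  # illustrative db call
        except Exception:
            continue  # drop the malformed log instead of re-queueing it forever
```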
* test(test_proxy_utils.py): add test to ensure bad log is dropped
* test(test_proxy_utils.py): ensure in-memory spend logs reset after bad log error
* test(test_user_api_key_auth.py): add unit test to ensure end user id as str works
* fix(auth_utils.py): ensure extracted end user id is always a str
prevents db cost tracking errors
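A minimal sketch of the coercion (the real helper in auth_utils.py covers more request shapes):
```python
from typing import Optional

def get_end_user_id_from_request_body(request_body: dict) -> Optional[str]:
    end_user_id = request_body.get("user")
    # an int/uuid `user` field must not break db cost tracking, so always stringify
    return str(end_user_id) if end_user_id is not None else None
```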
* test(test_auth_utils.py): ensure get end user id from request body always returns a string
* test: update tests
* test: skip bedrock test - behaviour now supported
* test: fix testing
* refactor(spend_tracking_utils.py): reduce size of get_logging_payload
* test: fix test
* bump: version 1.59.4 → 1.59.5
* Revert "bump: version 1.59.4 → 1.59.5"
This reverts commit 1182b46b2e.
* fix(utils.py): fix spend logs retry logic
* fix(spend_tracking_utils.py): fix get tags
* fix(spend_tracking_utils.py): fix end user id spend tracking on pass-through endpoints
* feat(main.py): add new 'provider_specific_header' param
allows passing extra header for specific provider
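Usage sketch, where the payload shape is an assumption based on the commit message:
```python
import litellm

resp = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "hi"}],
    provider_specific_header={
        "custom_llm_provider": "anthropic",
        "extra_headers": {"anthropic-beta": "prompt-caching-2024-07-31"},
    },
)
```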
* fix(litellm_pre_call_utils.py): add unit test for pre call utils
* test(test_bedrock_completion.py): skip test now that bedrock supports this
* fix(utils.py): don't pass 'anthropic-beta' header to vertex - will cause request to fail
* fix(utils.py): add flag to allow user to disable filtering invalid headers
ensure user can control behaviour
* style(utils.py): cleanup message
* test(test_utils.py): add unit test to cover invalid header filtering
* fix(proxy_server.py): fix custom openapi schema generation
* fix(utils.py): pass extra headers if set
* fix(main.py): fix image variation to use 'client' param
* fix(user_dashboard.tsx): fix spend calculation when team selected
sum all team keys, not user keys
* docs(admin_ui_sso.md): fix docs tabbing
* feat(user_api_key_auth.py): introduce new 'enforce_rbac' param on jwt auth
allows proxy admin to prevent any unmapped yet authenticated jwt tokens from calling proxy
Fixes https://github.com/BerriAI/litellm/issues/6793
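The gate reduces to a sketch like this: a valid JWT whose roles map to nothing on litellm is rejected instead of passing through:
```python
from typing import Optional

def check_rbac(mapped_role: Optional[str], enforce_rbac: bool) -> None:
    if enforce_rbac and mapped_role is None:
        raise PermissionError("enforce_rbac=True: token has no mapped litellm role")
```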
* test: more unit testing + refactoring
* fix: fix returning id when obj not found in db
* fix(user_api_key_auth.py): add end user id tracking from jwt auth
* docs(token_auth.md): add doc on rbac with JWTs
* fix: fix unused params
* test: remove old test
* test: initial test to enforce all functions in user_api_key_auth.py have direct testing
* test(test_user_api_key_auth.py): add is_allowed_route unit test
* test(test_user_api_key_auth.py): add more tests
* test(test_user_api_key_auth.py): add complete testing coverage for all functions in `user_api_key_auth.py`
* test(test_db_schema_changes.py): add a unit test to ensure all db schema changes are backwards compatible
gives user an easy rollback path
* test: fix schema compatibility test filepath
* test: fix test
* fix(gemini/): support gemini 'frequency_penalty' and 'presence_penalty'
Closes https://github.com/BerriAI/litellm/issues/7748
* feat(proxy_server.py): new env var to disable prisma health check on startup
* test: fix test
* fix(gpt_transformation.py): fix response_format translation check for 4o models
Fixes https://github.com/BerriAI/litellm/issues/7616
* feat(key_management_endpoints.py): support 'temp_budget_increase' and 'temp_budget_expiry' fields
Allow proxy admin to grant temporary budget increases to keys
* fix(proxy/_types.py): enforce temp_budget_increase and temp_budget_expiry are always passed together
* feat(user_api_key_auth.py): initial working temp budget increase logic
ensures key budget exceeded error checks for temp budget in key metadata
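A sketch of the effective-budget check, assuming both fields sit in key metadata with a timezone-aware ISO expiry (field names from the commits above):
```python
from datetime import datetime, timezone

def get_effective_max_budget(key_max_budget: float, key_metadata: dict) -> float:
    temp_increase = key_metadata.get("temp_budget_increase")
    temp_expiry = key_metadata.get("temp_budget_expiry")  # tz-aware ISO string
    if temp_increase and temp_expiry:
        if datetime.fromisoformat(temp_expiry) > datetime.now(timezone.utc):
            return key_max_budget + temp_increase
    return key_max_budget
```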
* feat(proxy_server.py): return the key max budget and key spend in the response headers
Allows the client-side user to know their remaining limits
* test: add unit testing for new proxy utils
Ensures new key budget is correctly handled
* docs(temporary_budget_increase.md): add doc on temporary budget increase
* fix(utils.py): remove 3.5 from response_format check for now
not all azure 3.5 models support response_format
* fix(user_api_key_auth.py): return valid user api key auth object on all paths
* build(model_prices_and_context_window.json): add azure o1 pricing
Closes https://github.com/BerriAI/litellm/issues/7712
* refactor: replace regex with string method for whitespace check in stop-sequences handling (#7713)
* Allows overriding keep_alive time in ollama (#7079)
* Allows overriding keep_alive time in ollama
* Also adds to ollama_chat
* Adds some info on the docs about this parameter
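Usage sketch per this PR:
```python
import litellm

# keep the model loaded in ollama for 10 minutes after this call
resp = litellm.completion(
    model="ollama/llama3",
    messages=[{"role": "user", "content": "hi"}],
    keep_alive="10m",
)
```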
* fix: together ai warning (#7688)
Co-authored-by: Carl Senze <carl.senze@aleph-alpha.com>
* fix(proxy_server.py): handle config containing thread locked objects when using get_config_state
* fix(proxy_server.py): add exception to debug
* build(model_prices_and_context_window.json): update 'supports_vision' for azure o1
---------
Co-authored-by: Wolfram Ravenwolf <52386626+WolframRavenwolf@users.noreply.github.com>
Co-authored-by: Regis David Souza Mesquita <github@rdsm.dev>
Co-authored-by: Carl <45709281+capsenz@users.noreply.github.com>
Co-authored-by: Carl Senze <carl.senze@aleph-alpha.com>
* feat(ui_sso.py): support reading team ids from sso token
* feat(ui_sso.py): working upsert sso user teams membership in litellm - if team exists
Adds the user to the relevant teams, if the user belongs to teams that exist on litellm
* fix(ui_sso.py): safely handle add team member task
* build(ui/): support setting team id when creating team on UI
* build(ui/): teams.tsx
allow setting team id on ui
* build(circle_ci/requirements.txt): add fastapi-sso to ci/cd testing
* fix: fix linting errors
* feat(router.py): support request prioritization for text completion calls
* fix(internal_user_endpoints.py): fix sql query to return all keys, including null team id keys on `/user/info`
Fixes https://github.com/BerriAI/litellm/issues/7485
* fix: fix linting errors
* fix: fix linting error
* test(test_router_helper_utils.py): add direct test for '_schedule_factory'
Fixes code qa test
* fix(internal_user_endpoints.py): fix team list sort - handle team_alias being set + None
* fix(key_management_endpoints.py): allow team admin to create key for member via admin ui
Fixes https://github.com/BerriAI/litellm/issues/7482
* fix(proxy_server.py): allow querying info on specific model group via `/model_group/info`
allows client-side user to get model info from proxy
* fix(proxy_server.py): add docstring on `/model_group/info` showing how to filter by model name
* test(test_proxy_utils.py): add unit test for returning model group info filtered
* fix(proxy_server.py): fix query param
* fix(test_Get_model_info.py): handle no whitelisted bedrock models
* feat(proxy/utils.py): get associated litellm budget from db in combined_view for key
allows the user to create rate limit tiers and associate them with keys
* feat(proxy/_types.py): update the value of key-level tpm/rpm/model max budget metrics with the associated budget table values if set
allows rate limit tiers to be easily applied to keys
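End-to-end sketch of the tier flow against the proxy (endpoint paths follow these commits; payload fields are partly assumed):
```python
import requests

BASE = "http://localhost:4000"
HEADERS = {"Authorization": "Bearer sk-..."}  # proxy admin key placeholder

# 1. create a reusable budget "tier"
requests.post(f"{BASE}/budget/new", headers=HEADERS, json={
    "budget_id": "high-usage-tier", "tpm_limit": 100_000, "rpm_limit": 1_000,
})

# 2. keys pointing at that budget inherit its tpm/rpm/max-budget values
requests.post(f"{BASE}/key/generate", headers=HEADERS, json={
    "budget_id": "high-usage-tier",
})
```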
* docs(rate_limit_tiers.md): add doc on setting rate limit / budget tiers
make feature discoverable
* feat(key_management_endpoints.py): return litellm_budget_table value in key generate
make it easy for user to know associated budget on key creation
* fix(key_management_endpoints.py): document 'budget_id' param in `/key/generate`
* docs(key_management_endpoints.py): document budget_id usage
* refactor(budget_management_endpoints.py): refactor budget endpoints into separate file - makes it easier to run documentation testing against it
* docs(test_api_docs.py): add budget endpoints to ci/cd doc test + add missing param info to docs
* fix(customer_endpoints.py): use new pydantic obj name
* docs(user_management_heirarchy.md): add simple doc explaining teams/keys/org/users on litellm
* Litellm dev 12 26 2024 p2 (#7432)
* (Feat) Add logging for `POST v1/fine_tuning/jobs` (#7426)
* init commit ft jobs logging
* add ft logging
* add logging for FineTuningJob
* simple FT Job create test
* (docs) - show all supported Azure OpenAI endpoints in overview (#7428)
* azure batches
* update doc
* docs azure endpoints
* docs endpoints on azure
* docs azure batches api
* docs azure batches api
* fix(key_management_endpoints.py): fix key update to actually work
* test(test_key_management.py): add e2e test asserting ui key update call works
* fix: proxy/_types - fix linting errors
* test: update test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* fix: test
* fix(parallel_request_limiter.py): enforce tpm/rpm limits on key from tiers
* fix: fix linting errors
* test: fix test
* fix: remove unused import
* test: update test
* docs(customer_endpoints.py): document new model_max_budget param
* test: specify unique key alias
* docs(budget_management_endpoints.py): document new model_max_budget param
* test: fix test
* test: fix tests
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* fix(utils.py): default custom_llm_provider=None for 'supports_response_schema'
Closes https://github.com/BerriAI/litellm/issues/7397
* refactor(langfuse/): call langfuse logger inside customlogger compatible langfuse class, refactor langfuse logger to use verbose_logger.debug instead of print_verbose
* refactor(litellm_pre_call_utils.py): move config based team callbacks inside dynamic team callback logic
enables simpler unit testing for config-based team callbacks
* fix(proxy/_types.py): handle teamcallbackmetadata - none values
drop None values if present; if all are None, use a default dict to avoid downstream errors
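Minimal sketch of the None-handling (the real pydantic model in proxy/_types.py has the full field set):
```python
def clean_team_callback_metadata(values: dict) -> dict:
    cleaned = {k: v for k, v in values.items() if v is not None}
    # if every value was None, fall back to defaults so downstream code gets lists/dicts
    return cleaned or {"success_callback": [], "failure_callback": [], "callback_vars": {}}
```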
* test(test_proxy_utils.py): add unit test preventing future issues - asserts team_id in config state not popped off across calls
Fixes https://github.com/BerriAI/litellm/issues/6787
* fix(langfuse_prompt_management.py): add success + failure logging event support
* fix: fix linting error
* test: fix test
* test: fix test
* test: override o1 prompt caching - openai currently not working
* test: fix test
* ui fix - allow searching model list + fix bug on filtering
* qa fix - use correct provider name for azure_text
* ui wrap content onto next line
* ui fix - allow selecting current UI session when logging in
* ui session budgets
* ui show provider models on wildcard models
* test provider name appears in model list
* ui fix auto scroll on chat ui tab
* fix(proxy_track_cost_callback.py): log to db if only end user param given
* fix: allows for jwt-auth based end user id spend tracking to work
* fix(utils.py): fix 'get_end_user_id_for_cost_tracking' to use 'user_api_key_end_user_id'
more stable - works with jwt-auth based end user tracking as well
* test(test_jwt.py): add e2e unit test to confirm end user cost tracking works for spend logs
* test: update test to use end_user api key hash param
* fix(langfuse.py): support end user cost tracking via jwt auth + langfuse
logs end user to langfuse if decoded from jwt token
* fix: fix linting errors
* test: fix test
* test: fix test
* fix: fix end user id extraction
* fix: run test earlier