* refactor(db_spend_update_writer.py): aggregate table is entirely different
* test(test_db_spend_update_writer.py): add unit test to ensure that, if disable_spend_logs is true, daily user transactions are still logged
* test: fix test
* fix(internal_user_endpoints.py): cleanup unused variables on beta endpoint
no team/org split on daily user endpoint
* build(model_prices_and_context_window.json): gemini-2.0-flash supports audio input
* feat(gemini/transformation.py): support passing audio input to gemini
* test: fix test
* fix(gemini/transformation.py): support audio input as a URL
enables passing Google Cloud Storage bucket URLs
* fix(gemini/transformation.py): support explicitly passing the file format
* fix(gemini/transformation.py): expand support for file types inferred from URLs
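A minimal sketch of the audio-input usage the commits above describe, using the OpenAI-style `file` content block; the bucket path and format value are illustrative:

```python
import litellm

# Pass a Google Cloud Storage URL directly to Gemini; the explicit
# "format" field avoids relying on file-type inference from the URL.
response = litellm.completion(
    model="gemini/gemini-2.0-flash",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize this recording."},
                {
                    "type": "file",
                    "file": {
                        "file_id": "gs://my-bucket/recording.mp3",  # illustrative bucket path
                        "format": "audio/mp3",  # explicitly passed file format
                    },
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```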
* fix(sagemaker/completion/transformation.py): fix special token error when counting sagemaker tokens
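One common cause of this class of error, sketched here with `tiktoken` as an assumption about the underlying tokenizer: encoding raises on special tokens unless the check is disabled, which is safe when the goal is only counting.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "user text that happens to contain <|endoftext|>"

# enc.encode(text) raises ValueError on special tokens by default;
# passing disallowed_special=() disables that check for pure token counting.
num_tokens = len(enc.encode(text, disallowed_special=()))
```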
* test: fix import
* build(README.md): initial commit adding a separate folder for additional proxy files, meant to reduce the size of the core package
* build(litellm-proxy-extras/): new pip package for storing migration files
allows the litellm proxy to use migration files without adding them to the core repo
* build(litellm-proxy-extras/): cleanup pyproject.toml
* build: move prisma migration files inside new proxy extras package
* build(run_migration.py): update script to write to correct folder
* build(proxy_cli.py): load in migration files from litellm-proxy-extras
Closes https://github.com/BerriAI/litellm/issues/9558
* build: add MIT license to litellm-proxy-extras
* test: update test
* fix: fix schema
* bump: version 0.1.0 → 0.1.1
* build(publish-proxy-extras.sh): add script for publishing new proxy-extras version
* build(liccheck.ini): add litellm-proxy-extras to authorized packages
* fix(litellm-proxy-extras/utils.py): move prisma migrate logic inside extra proxy pkg
easier, since the migrations folder already lives there
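A sketch of how the proxy could resolve the migrations folder from the installed extras package; the package name comes from the commits above, but the layout and helper name here are assumptions:

```python
from pathlib import Path

def get_migrations_dir() -> Path:
    """Resolve the prisma migrations folder shipped inside litellm-proxy-extras.

    Hypothetical sketch: assumes the package ships a `migrations/` directory
    next to its __init__.py, one way the layout described above could work.
    """
    import litellm_proxy_extras

    return Path(litellm_proxy_extras.__file__).parent / "migrations"
```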
* build(pre-commit-config.yaml): add litellm_proxy_extras to ci tests
* docs(config_settings.md): document new env var
* build(pyproject.toml): bump relevant files when litellm-proxy-extras version changed
* build(pre-commit-config.yaml): run poetry check on litellm-proxy-extras as well
* fix(create_user_button.tsx): allow admin to set the models a user has access to on invite
Enables controlling model access on invite
* feat(auth_checks.py): enforce 'no-model-access' special model name on backend
prevents users from calling models when their default key has no model access
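A sketch of what the backend enforcement could look like; the helper name and error shape are illustrative, only the `no-model-access` sentinel comes from the commit above:

```python
from fastapi import HTTPException

def check_model_access(requested_model: str, key_models: list[str]) -> None:
    # "no-model-access" is a special sentinel model name: a key carrying it
    # (e.g. a default key created on invite) may not call any model.
    if "no-model-access" in key_models:
        raise HTTPException(
            status_code=403,
            detail=f"Key has no model access; cannot call '{requested_model}'.",
        )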
* fix(chat_ui.tsx): allow user to input custom model
* fix(chat_ui.tsx): pull available models based on models key has access to
* style(create_user_button.tsx): move default model inside 'personal key creation' accordion
* fix(chat_ui.tsx): fix linting error
* test(test_auth_checks.py): add unit-test for special model name
* docs(internal_user_endpoints.py): update docstring
* fix test_moderations_bad_model
* Litellm dev 02 27 2025 p6 (#8891)
* fix(http_parsing_utils.py): orjson can throw errors on some emojis in text, default to json.loads
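One way to realize the fallback described above (a sketch, not necessarily the exact change):

```python
import json
import orjson

def parse_body(raw: bytes) -> dict:
    # orjson can reject some emoji/surrogate sequences that the stdlib
    # json module tolerates, so fall back to json.loads on failure.
    try:
        return orjson.loads(raw)
    except orjson.JSONDecodeError:
        return json.loads(raw)
```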
* fix(sagemaker/handler.py): support passing model id on async streaming
* fix(litellm_pre_call_utils.py): Fixes https://github.com/BerriAI/litellm/issues/7237
* Fix calling claude via invoke route + response_format support for claude on invoke route (#8908)
* fix(anthropic_claude3_transformation.py): fix amazon anthropic claude 3 tool calling transformation on invoke route
moves to using the anthropic config as the base
* fix(utils.py): expose anthropic config via ProviderConfigManager
* fix(llm_http_handler.py): support json mode on async completion calls
* fix(invoke_handler/make_call): support json mode for anthropic called via bedrock invoke
* fix(anthropic/): handle `response_format: {"type": "text"}` + migrate amazon claude 3 invoke config to inherit from anthropic config
Prevents an error when passing in `response_format: {"type": "text"}`
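A minimal sketch of the usage this unblocks; the `bedrock/invoke/...` model id below is illustrative:

```python
import litellm

# Previously this raised; {"type": "text"} is now treated as a no-op
# instead of being converted into a JSON-mode tool call.
response = litellm.completion(
    model="bedrock/invoke/anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model id
    messages=[{"role": "user", "content": "Say hello."}],
    response_format={"type": "text"},
)
```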
* test: fix test
* fix(utils.py): fix base invoke provider check
* fix(anthropic_claude3_transformation.py): don't pass 'stream' param
* fix: fix linting errors
* fix(converse_transformation.py): handle response_format type=text for converse
* converse_transformation: pass 'description' if set in response_format (#8907)
* test(test_bedrock_completion.py): e2e test ensuring tool description is passed in
* fix(converse_transformation.py): pass description, if set
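A sketch of the call path this affects: an OpenAI-style `json_schema` response_format whose `description` the converse transformation now forwards into the generated tool spec (model id illustrative):

```python
import litellm

response = litellm.completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model id
    messages=[{"role": "user", "content": "Extract the customer's name."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "extract_name",
            # the description is now passed through to the converse tool
            "description": "Extract a customer name from free text",
            "schema": {
                "type": "object",
                "properties": {"name": {"type": "string"}},
                "required": ["name"],
            },
        },
    },
)
```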
* fix(transformation.py): Fixes https://github.com/BerriAI/litellm/issues/8767#issuecomment-2689887663
* Fix bedrock passing `response_format: {"type": "text"}` (#8900)
* fix(converse_transformation.py): ignore `{"type": "text"}` value in response_format
no-op for bedrock
* fix(converse_transformation.py): handle adding response format value to tools
* fix(base_invoke_transformation.py): fix 'get_bedrock_invoke_provider' to handle cross-region-inferencing models
* test(test_bedrock_completion.py): add unit testing for bedrock invoke provider logic
* test: update test
* fix(exception_mapping_utils.py): add context window exceeded error handling for databricks provider route
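With the new mapping, callers can catch the typed exception instead of parsing provider error strings (model name illustrative):

```python
import litellm

very_long_prompt = "word " * 500_000  # deliberately exceeds the context window

try:
    litellm.completion(
        model="databricks/databricks-dbrx-instruct",  # illustrative model name
        messages=[{"role": "user", "content": very_long_prompt}],
    )
except litellm.ContextWindowExceededError:
    # databricks context-length errors now map to this typed exception
    ...  # e.g. truncate the prompt and retry
```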
* fix(fireworks_ai/): support passing tools + response_format together
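A sketch of the combination this enables; the model id and tool definition are illustrative:

```python
import litellm

response = litellm.completion(
    model="fireworks_ai/accounts/fireworks/models/llama-v3p1-70b-instruct",  # illustrative
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                },
            },
        }
    ],
    response_format={"type": "json_object"},  # previously conflicted with tools
)
```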
* fix: cleanup
* fix(base_invoke_transformation.py): fix imports
* (Feat) - Show Error Logs on LiteLLM UI (#8904)
* fix test_moderations_bad_model
* use async_post_call_failure_hook
* basic logging errors in DB
* show status on ui
* show status on ui
* ui show request / response side by side
* stash fixes
* working, track raw request
* track error info in metadata
* fix showing error / request / response logs
* show traceback on error viewer
* ui with traceback of error
* fix async_post_call_failure_hook
* fix(http_parsing_utils.py): orjson can throw errors on some emojis in text, default to json.loads
* test_get_error_information
* fix code quality
* rename proxy track cost callback test
* _should_store_errors_in_spend_logs
* feature flag error logs
* Revert "_should_store_errors_in_spend_logs"
This reverts commit 7f345df477.
* Revert "feature flag error logs"
This reverts commit 0e90c022bb.
* test_spend_logs_payload
* fix OTEL log_db_metrics
* fix import json
* fix ui linting error
* test_async_post_call_failure_hook
* test_chat_completion_bad_model_with_spend_logs
---------
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
* ui new build
* test_chat_completion_bad_model_with_spend_logs
* docs(release_cycle.md): document release cycle
* bump: version 1.62.0 → 1.62.1
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* fix(user_api_key_auth.py): Fixes https://github.com/BerriAI/litellm/issues/8780
security fix: enforce model access checks on azure routes
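A sketch of the behavior the fix enforces, assuming the proxy's Azure-compatible deployment route; the endpoint path, key, and status codes below are illustrative:

```python
import requests

# Azure-style deployment route on the LiteLLM proxy; with the fix, this
# path runs the same model-access checks as /chat/completions.
resp = requests.post(
    "http://localhost:4000/openai/deployments/gpt-4o/chat/completions",  # illustrative
    headers={"Authorization": "Bearer sk-key-without-gpt-4o-access"},  # illustrative key
    json={"messages": [{"role": "user", "content": "hi"}]},
)
assert resp.status_code in (401, 403)  # request is rejected, not forwarded
```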
* test(test_user_api_key_auth.py): add unit testing
* test(test_openai_endpoints.py): add e2e test to ensure azure routes also run through model validation checks