* Add date picker to usage tab + Add reasoning_content token tracking across all providers on streaming (#9722)
* feat(new_usage.tsx): add date picker for new usage tab
allows users to look back at their usage data
* feat(anthropic/chat/transformation.py): report reasoning tokens in completion token details
allows tracking how many reasoning tokens are actually being used
* feat(streaming_chunk_builder.py): return reasoning_tokens in anthropic/openai streaming response
allows tracking reasoning token usage across providers (see the sketch below)
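A minimal sketch of the idea behind these two commits, assuming an OpenAI-style `completion_tokens_details.reasoning_tokens` usage shape; the chunk-folding helper and simplified classes below are illustrative stand-ins, not the actual litellm internals:

```python
# Hypothetical sketch: fold per-chunk token counts into one usage object,
# carrying reasoning tokens through in completion_tokens_details.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class CompletionTokensDetails:
    reasoning_tokens: int = 0


@dataclass
class Usage:
    prompt_tokens: int = 0
    completion_tokens: int = 0
    completion_tokens_details: CompletionTokensDetails = field(
        default_factory=CompletionTokensDetails
    )


def build_usage(chunks: List[dict]) -> Usage:
    """Sum token counts across streamed chunks into a single Usage object."""
    usage = Usage()
    for chunk in chunks:
        chunk_usage: Optional[dict] = chunk.get("usage")
        if not chunk_usage:
            continue
        usage.prompt_tokens += chunk_usage.get("prompt_tokens", 0)
        usage.completion_tokens += chunk_usage.get("completion_tokens", 0)
        details = chunk_usage.get("completion_tokens_details") or {}
        usage.completion_tokens_details.reasoning_tokens += details.get(
            "reasoning_tokens", 0
        )
    return usage
```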
* Fix update team metadata + fix bulk adding models on Ui (#9721)
* fix(handle_add_model_submit.tsx): fix bulk adding models
* fix(team_info.tsx): fix team metadata update
Fixes https://github.com/BerriAI/litellm/issues/9689
* (v0) Unified file id - allow calling multiple providers with same file id (#9718)
* feat(files_endpoints.py): initial commit adding 'target_model_names' support
allows the developer to specify all the models they want to call with the file (see the sketch below)
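A hedged sketch of what a request with `target_model_names` might look like; only the parameter name comes from the commit above, while the URL, auth header, form layout, and response shape are illustrative assumptions:

```python
# Hypothetical request sketch: upload one file, callable under several models.
import requests

with open("batch.jsonl", "rb") as f:
    resp = requests.post(
        "http://localhost:4000/v1/files",
        headers={"Authorization": "Bearer sk-..."},
        files={"file": ("batch.jsonl", f)},
        data={
            "purpose": "batch",
            # one upload, callable under every model listed here
            "target_model_names": "gpt-4o, claude-3-5-sonnet",
        },
    )
file_id = resp.json()["id"]  # unified id, resolved per provider at call time
```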
* feat(files_endpoints.py): return unified files endpoint
* test(test_files_endpoints.py): add validation test - if invalid purpose submitted
* feat: more updates
* feat: initial working commit of unified file id translation
* fix: additional fixes
* fix(router.py): remove model replace logic in jsonl on acreate_file
enables file upload to work for chat completion requests as well
* fix(files_endpoints.py): remove whitespace around model name
* fix(azure/handler.py): return acreate_file with correct response type
* fix: fix linting errors
* test: fix mock test to run on github actions
* fix: fix ruff errors
* fix: fix file too large error
* fix(utils.py): remove redundant var
* test: modify test to work on github actions
* test: update tests
* test: more debug logs to understand ci/cd issue
* test: fix test for respx
* test: skip mock respx test
fails on ci/cd - not clear why
* fix: fix ruff check
* fix: fix test
* fix(model_connection_test.tsx): fix linting error
* test: update unit tests
* feat(internal_user_endpoints.py): return 'total_tokens' in `/user/daily/analytics`
* test(test_internal_user_endpoints.py): add unit test to assert spend metrics and dailyspend metadata always report the same fields
* build(schema.prisma): record success + failure calls to daily user table
allows understanding why model requests might exceed provider requests (e.g. the user hit a rate-limit error; see the sketch below)
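A sketch of the kind of per-day bookkeeping this enables; the column names and prisma-style increment payload below are assumptions, not the actual schema:

```python
# Hypothetical daily-table update: recording success and failure separately
# explains gaps between model requests and provider requests (a rate-limited
# call fails before it ever reaches the provider).
def daily_spend_update(status: str) -> dict:
    is_success = status == "success"
    return {
        "successful_requests": {"increment": 1 if is_success else 0},
        "failed_requests": {"increment": 0 if is_success else 1},
    }
```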
* fix(internal_user_endpoints.py): report success / failure requests in API
* fix(proxy/utils.py): default to success
status can be missing or None at times for successful requests (see the sketch below)
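The defaulting logic described above, sketched under the assumption that the log payload is a plain dict with an optional `status` field:

```python
def resolve_status(payload: dict) -> str:
    # status can be missing or None for successful requests, so default to success
    return payload.get("status") or "success"
```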
* feat(new_usage.tsx): show success/failure calls on UI
* style(new_usage.tsx): ui cleanup
* fix: fix linting error
* fix: fix linting error
* feat(litellm-proxy-extras/): add new migration files
* fix(proxy_server.py): get master key from environment, if not set in general settings or general settings not set at all
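The resolution order described above, sketched; `LITELLM_MASTER_KEY` is the env var litellm documents, but treat the exact lookup below as an assumption:

```python
import os
from typing import Optional


def resolve_master_key(general_settings: Optional[dict]) -> Optional[str]:
    # Prefer general_settings; fall back to the environment when the key is
    # unset there, or when general_settings is missing entirely.
    if general_settings and general_settings.get("master_key"):
        return general_settings["master_key"]
    return os.getenv("LITELLM_MASTER_KEY")
```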
* test: mark flaky test
* test(test_proxy_server.py): mock prisma client
* ci: add new github workflow for testing just the mock tests
* fix: fix linting error
* ci(conftest.py): add conftest.py to isolate proxy tests
* build(pyproject.toml): add respx to dev dependencies
* build(pyproject.toml): add prisma to dev dependencies
* test: fix mock prompt management tests to use a mock anthropic key
* ci(test-litellm.yml): parallelize mock testing
makes it run faster
* build(pyproject.toml): add hypercorn as dev dep
* build(pyproject.toml): separate proxy vs. core dev dependencies
makes it easier for non-proxy contributors to run tests locally - e.g. no need to install hypercorn
* ci(test-litellm.yml): pin python version
* test(test_rerank.py): move test - cannot be mocked, requires aws credentials for e2e testing
* ci: add thank you message to ci
* test: add mock env var to test
* test: add autouse to tests
* test: test mock env vars for e2e tests
* build: new ui build
* build: new ui build
* fix(proxy_server.py): only show user models their key can access on `/models`
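A minimal sketch of the filtering this implies, with hypothetical names; the real check lives in proxy_server.py and is more involved:

```python
# Hypothetical sketch: /models returns only the models the caller's key
# can access, not the router's full model list.
from typing import List, Set


def visible_models(all_models: List[dict], key_models: Set[str]) -> List[dict]:
    return [m for m in all_models if m["model_name"] in key_models]
```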
* fix(model_management_endpoints.py): ensure team admin can add models
* test: update unit testing to reflect changes
* fix(model_dashboard.tsx): fix sizing on models page
* build: fix ui
* feat(view_logs.tsx): show model id + api base in request logs
easier debugging
* fix(index.tsx): fix length of api base
easier viewing
* refactor(leftnav.tsx): show models tab to team admin
* feat(model_dashboard.tsx): add explainer for what the 'models' page is for team admin
helps them understand how they can use it
* feat(model_management_endpoints.py): restrict model add by team to just team admin
allows team admins to add models via non-team keys (e.g. a UI token; see the sketch below)
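A sketch of the auth check implied by these two commits; the `Team` shape and function name are illustrative stand-ins:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Team:
    admin_ids: List[str]


def can_add_team_model(user_id: str, team: Team) -> bool:
    # Hypothetical check: only a team admin may add models to the team, but the
    # key used need not belong to the team (e.g. a UI session token).
    return user_id in team.admin_ids
```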
* test(test_add_update_models.py): update unit testing for new behaviour
* fix(model_dashboard.tsx): show user the models
* feat(proxy_server.py): add new query param 'user_models_only' to `/v2/model/info`
Allows a user to retrieve just the models they've added
Used in the UI to show internal users just the models they've added (see the sketch below)
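A hedged call sketch: the endpoint and query param come from the commit above, while the URL, auth header, and response key are assumptions:

```python
# Hypothetical call: fetch only the models this user added.
import requests

resp = requests.get(
    "http://localhost:4000/v2/model/info",
    params={"user_models_only": "true"},
    headers={"Authorization": "Bearer sk-..."},
)
my_models = resp.json()["data"]  # response key assumed for illustration
```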
* feat(model_dashboard.tsx): allow team admins to view their own models
* fix: allow ui user to fetch model cost map
* feat(add_model_tab.tsx): require team admins to specify team when onboarding models
* fix(_types.py): add `/v1/model/info` to info route
`/model/info` was already there
* fix(model_info_view.tsx): allow user to edit a model they created
* fix(model_management_endpoints.py): allow team admin to update team model
* feat(model_management_endpoints.py): allow team admin to delete team models
* fix(model_management_endpoints.py): don't require team id to be set when adding a model
* fix(proxy_server.py): fix linting error
* fix: fix ui linting error
* fix(model_management_endpoints.py): ensure consistent auth checks on all model calls
* test: remove old test - function no longer exists in same form
* test: add updated mock testing
* refactor: introduce new transformation config for gpt-4o-transcribe models
* refactor: expose new transformation configs for audio transcription
* ci: fix config yml
* feat(openai/transcriptions): support provider config transformation on openai audio transcriptions
allows gpt-4o and whisper audio transcription to work as expected (see the sketch below)
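A minimal sketch of the provider-config pattern these commits adopt; the class and method names below are simplified stand-ins for the actual transformation configs:

```python
# Simplified sketch: each provider exposes a config object that transforms an
# audio-transcription request before dispatch. Interfaces are illustrative.
from abc import ABC, abstractmethod


class BaseAudioTranscriptionConfig(ABC):
    @abstractmethod
    def transform_request(self, model: str, optional_params: dict) -> dict:
        ...


class OpenAIGPTAudioTranscriptionConfig(BaseAudioTranscriptionConfig):
    def transform_request(self, model: str, optional_params: dict) -> dict:
        # gpt-4o-transcribe models accept a narrower parameter set than whisper,
        # so drop anything the target model does not support.
        supported = {"language", "prompt", "response_format", "temperature"}
        return {k: v for k, v in optional_params.items() if k in supported}
```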
* refactor: migrate fireworks ai + deepgram to new transform request pattern
* feat(openai/): working support for gpt-4o-audio-transcribe
* build(model_prices_and_context_window.json): add gpt-4o-transcribe to model cost map
* build(model_prices_and_context_window.json): specify what endpoints are supported for `/audio/transcriptions`
* fix(get_supported_openai_params.py): fix return
* refactor(deepgram/): migrate unit test to deepgram handler
* refactor: cleanup unused imports
* fix(get_supported_openai_params.py): fix linting error
* test: update test