* feat(schema.prisma): initial commit adding aggregate table for team spend
allows team spend to be visible at 1M+ logs
* feat(db_spend_update_writer.py): support logging aggregate team spend
allows the usage dashboard to work at 1M+ logs
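To illustrate the idea behind the aggregate table, here is a minimal sketch of rolling per-request spend up into one row per (team_id, day), so the dashboard reads a handful of aggregate rows instead of scanning 1M+ raw logs. The names `DailyTeamSpendKey` and `aggregate_team_spend` are illustrative, not the actual `db_spend_update_writer.py` internals.

```python
# Sketch only: collapse raw spend logs into daily per-team aggregates.
from collections import defaultdict
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class DailyTeamSpendKey:
    team_id: str
    day: date


def aggregate_team_spend(spend_logs: list[dict]) -> dict[DailyTeamSpendKey, float]:
    """Collapse raw spend logs into one (team_id, day) -> spend entry each."""
    daily_totals: dict[DailyTeamSpendKey, float] = defaultdict(float)
    for log in spend_logs:
        key = DailyTeamSpendKey(team_id=log["team_id"], day=log["start_time"].date())
        daily_totals[key] += log["spend"]
    # each entry becomes an upsert into the aggregate team spend table
    return dict(daily_totals)
```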
* feat(litellm-proxy-extras/): add new migration file
* fix(db_spend_update_writer.py): fix return type
* build: bump requirements
* fix: fix ruff error
* fix(openai.py): ensure openai file object shows up on logs
* fix(managed_files.py): return unified file id as b64 str
allows retrieve file id to work as expected
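As an illustration of the b64-encoded unified file id, here is a round-trip sketch; the `litellm_proxy:` payload format inside the id is hypothetical, not the real `managed_files.py` scheme.

```python
# Illustration only: encode a provider-agnostic file id as a URL-safe base64
# string so retrieve/delete calls can round-trip the same id. The payload
# format ("litellm_proxy:<id>") is hypothetical.
import base64


def encode_unified_file_id(raw_id: str) -> str:
    return base64.urlsafe_b64encode(f"litellm_proxy:{raw_id}".encode()).decode()


def decode_unified_file_id(unified_id: str) -> str:
    decoded = base64.urlsafe_b64decode(unified_id.encode()).decode()
    return decoded.removeprefix("litellm_proxy:")


assert decode_unified_file_id(encode_unified_file_id("file-abc123")) == "file-abc123"
```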
* fix(managed_files.py): apply decoded file id transformation
* fix: add unit test for file id + decode logic
* fix: initial commit for litellm_proxy support with CRUD Endpoints
* fix(managed_files.py): support retrieve file operation
* fix(managed_files.py): support DELETE endpoint for files
* fix(managed_files.py): retrieve file content support
supports the OpenAI retrieve file content API
* fix: fix linting error
* test: update tests
* fix: fix linting error
* feat(managed_files.py): support reading / writing files in DB
* feat(managed_files.py): support deleting file from DB on delete
* test: update testing
* fix(spend_tracking_utils.py): ensure each file create request is logged correctly
* fix(managed_files.py): fix storing / returning managed file object from cache
* fix(files/main.py): pass litellm params to azure route
* test: fix test
* build: add new prisma migration
* build: bump requirements
* test: add more testing
* refactor: cleanup post merge w/ main
* fix: fix code qa errors
* add team_member_permissions
* add GetTeamMemberPermissionsRequest types
* crud endpoint for team member permissions
* test team member permissions CRUD
* fix GetTeamMemberPermissionsRequest
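A hedged usage sketch of the team member permissions CRUD from a client's perspective; the `/team/permissions_list` and `/team/permissions_update` paths and the payload fields are assumptions for illustration.

```python
# Sketch: read and update team member permissions via the proxy admin API.
# Endpoint paths and payload fields are assumptions for illustration.
import requests

PROXY_BASE = "http://localhost:4000"
HEADERS = {"Authorization": "Bearer sk-1234"}  # admin key

# list the permissions currently granted to members of a team
resp = requests.get(
    f"{PROXY_BASE}/team/permissions_list",
    params={"team_id": "my-team-id"},
    headers=HEADERS,
)
print(resp.json())

# grant team members access to key management routes
requests.post(
    f"{PROXY_BASE}/team/permissions_update",
    json={
        "team_id": "my-team-id",
        "team_member_permissions": ["/key/generate", "/key/update"],
    },
    headers=HEADERS,
)
```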
* endpoint for updating default team settings on ui
* add GET default team settings endpoint
* ui: expose default team settings on UI
* update to use DefaultTeamSSOParams
* DefaultTeamSSOParams
* fix DefaultTeamSSOParams
* docs team management
* test_update_default_team_settings
* feat(managed_files.py): encode file type in unified file id
simplifies calling gemini models
* fix(common_utils.py): fix extracting file type from unified file id
* fix(litellm_logging.py): create standard logging payload for create file call
* fix: fix linting error
* refactor(litellm_logging.py): refactor realtime cost tracking to use the same common code path as the rest
Ensures basic features like base model just work
* feat(realtime/): support 'base_model' cost tracking on realtime api
Fixes issue where base model was not working on realtime
* fix: fix ruff linting error
* test: fix test
* fix(cost_calculator.py): handle custom pricing at deployment level for router
* test: add unit tests
* fix(router.py): show custom pricing on UI
check correct model str
* fix: fix linting error
* docs(custom_pricing.md): clarify custom pricing for proxy
Fixes https://github.com/BerriAI/litellm/issues/8573#issuecomment-2790420740
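For context, deployment-level custom pricing is set via `litellm_params` on a router deployment; a minimal sketch below, with model names, keys, and rates as placeholders.

```python
# Sketch: per-deployment custom pricing on the router via litellm_params.
# Model names, keys, and rates are placeholders.
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "my-gpt-4o",
            "litellm_params": {
                "model": "azure/my-gpt-4o-deployment",
                "api_key": "my-azure-api-key",
                "api_base": "https://my-endpoint.openai.azure.com",
                # deployment-level custom pricing, picked up by cost_calculator.py
                "input_cost_per_token": 0.0000025,
                "output_cost_per_token": 0.00001,
            },
        }
    ]
)
```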
* test: update code qa test
* fix: cleanup traceback
* fix: handle litellm param custom pricing
* test: update test
* fix(cost_calculator.py): add router model id to list of potential model names
* fix(cost_calculator.py): fix router model id check
* fix: router.py - maintain older model registry approach
* fix: fix ruff check
* fix(router.py): router get deployment info
add custom values to mapped dict
* test: update test
* fix(utils.py): update only if value is non-null
* test: add unit test
* rendering tags on UI
* use /models for building tags
* CRUD endpoints for Tag management
* fix tag management
* working api for LIST tags
* working tag management
* refactor UI components
* fixes ui tag management
* clean up ui tag management
* fix tag management ui
* fix show allowed llms
* e2e tag controls
* stash change for rendering tags on UI
* ui working tag selector on Test Key page
* fixes for tag management
* clean up tag info
* fix code quality
* test for tag management
* ui clarify what tag routing is
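A hedged sketch of the tag management endpoints above from a client's perspective; the `/tag/new` and `/tag/list` routes and payload fields are assumptions for illustration.

```python
# Sketch: create a tag restricted to specific models, then list tags.
# Routes and payload fields are assumptions for illustration.
import requests

PROXY_BASE = "http://localhost:4000"
HEADERS = {"Authorization": "Bearer sk-1234"}  # admin key

requests.post(
    f"{PROXY_BASE}/tag/new",
    json={
        "name": "private-data",
        "description": "requests that must stay on self-hosted models",
        "models": ["ollama/llama3"],  # allowed LLMs for this tag
    },
    headers=HEADERS,
)

print(requests.get(f"{PROXY_BASE}/tag/list", headers=HEADERS).json())
```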
* test: move test to just checking async
* fix(transformation.py): handle function call with no schema
* fix(utils.py): handle pydantic base model in message tool calls
Fix https://github.com/BerriAI/litellm/issues/9321
* fix(vertex_and_google_ai_studio.py): handle tools=[]
Fixes https://github.com/BerriAI/litellm/issues/9080
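A repro-style sketch of the `tools=[]` case from the issue above: an empty tools list should behave like omitting tools for Gemini / Vertex AI requests. Model name is a placeholder.

```python
# Repro-style sketch: tools=[] should not break the vertex / google ai studio
# transformation. Model name is a placeholder.
import litellm

response = litellm.completion(
    model="gemini/gemini-2.0-flash",
    messages=[{"role": "user", "content": "What's the weather in SF?"}],
    tools=[],  # previously caused the transformation to fail
)
print(response.choices[0].message.content)
```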
* test: remove max token restriction
* test: fix basic test
* fix(get_supported_openai_params.py): fix check
* fix(converse_transformation.py): support fake streaming for meta.llama3-3-70b-instruct-v1:0
* fix: fix test
* fix: parse out empty dictionary on dbrx streaming + tool calls
* fix(handle-'strict'-param-when-calling-fireworks-ai): fireworks ai does not support 'strict' param
* fix: fix ruff check
* fix: handle no strict in function
* fix: revert bedrock change - handle in separate PR
* fix(vertex_ai.py): restrict schema keys in common_utils.py
only pass keys accepted by Vertex AI
prevents JSON Schema keys like $id and $comment from causing Vertex AI OpenAPI calls to fail
* fix(test_vertex.py): add testing to ensure only accepted schema params passed in
* fix(common_utils.py): fix linting error
* test: update test
* test: accept function
* fix(router.py): support reusable credentials via passthrough router
enables reusable vertex credentials to be used in passthrough
* test: fix test
* test(test_router_adding_deployments.py): add unit testing
* Add date picker to usage tab + Add reasoning_content token tracking across all providers on streaming (#9722)
* feat(new_usage.tsx): add date picker for new usage tab
allow user to look back on their usage data
* feat(anthropic/chat/transformation.py): report reasoning tokens in completion token details
allows usage tracking on how many reasoning tokens are actually being used
* feat(streaming_chunk_builder.py): return reasoning_tokens in anthropic/openai streaming response
allows tracking reasoning_token usage across providers
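A hedged sketch of reading reasoning-token usage from a streamed response once the chunks are joined; the `reasoning_effort` parameter and the `completion_tokens_details.reasoning_tokens` attribute path follow the OpenAI-style usage object and are assumptions here.

```python
# Sketch: track reasoning tokens from a streamed anthropic response.
# Parameter names and the usage attribute path are assumptions, not a
# guaranteed schema.
import litellm

chunks = []
stream = litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",
    messages=[{"role": "user", "content": "Prove there are infinitely many primes."}],
    reasoning_effort="low",
    stream=True,
    stream_options={"include_usage": True},
)
for chunk in stream:
    chunks.append(chunk)

full_response = litellm.stream_chunk_builder(chunks)
details = full_response.usage.completion_tokens_details
print("reasoning tokens:", getattr(details, "reasoning_tokens", None))
```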
* Fix update team metadata + fix bulk adding models on UI (#9721)
* fix(handle_add_model_submit.tsx): fix bulk adding models
* fix(team_info.tsx): fix team metadata update
Fixes https://github.com/BerriAI/litellm/issues/9689
* (v0) Unified file id - allow calling multiple providers with same file id (#9718)
* feat(files_endpoints.py): initial commit adding 'target_model_names' support
allow developer to specify all the models they want to call with the file
* feat(files_endpoints.py): return unified files endpoint
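A hedged client-side sketch of the `target_model_names` flow: upload once through the proxy, get back a unified file id valid for each listed deployment. Passing the field via `extra_body` is an assumption about the wire format.

```python
# Sketch: upload one file for multiple target models through the proxy and
# reuse the returned unified id with either deployment. Passing
# target_model_names via extra_body is an assumption about the wire format.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-1234")

created_file = client.files.create(
    file=open("batch_input.jsonl", "rb"),
    purpose="batch",
    extra_body={"target_model_names": "gpt-4o, azure-gpt-4o"},
)
print(created_file.id)  # unified file id, usable with either target model
```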
* test(test_files_endpoints.py): add validation test - if invalid purpose submitted
* feat: more updates
* feat: initial working commit of unified file id translation
* fix: additional fixes
* fix(router.py): remove model replace logic in jsonl on acreate_file
enables file upload to work for chat completion requests as well
* fix(files_endpoints.py): remove whitespace around model name
* fix(azure/handler.py): return acreate_file with correct response type
* fix: fix linting errors
* test: fix mock test to run on github actions
* fix: fix ruff errors
* fix: fix file too large error
* fix(utils.py): remove redundant var
* test: modify test to work on github actions
* test: update tests
* test: more debug logs to understand ci/cd issue
* test: fix test for respx
* test: skip mock respx test
fails on ci/cd - not clear why
* fix: fix ruff check
* fix: fix test
* fix(model_connection_test.tsx): fix linting error
* test: update unit tests
* test: fix import for test
* fix: fix bad error string
* docs: cleanup files docs
* fix(files/main.py): cleanup error string
* style: initial commit with a provider/config pattern for files api
google ai studio files api onboarding
* fix: test
* feat(gemini/files/transformation.py): support gemini files api response transformation
* fix(gemini/files/transformation.py): return file id as gemini uri
allows id to be passed in to chat completion request, just like openai
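An illustrative sketch of that round-trip: create a file against the Gemini files API, then reference the returned id in a chat completion content block, mirroring the OpenAI flow. Treat the exact content-block shape and the `purpose` value as assumptions.

```python
# Sketch: upload a file to gemini's files api via litellm, then reference the
# returned id (a gemini uri) in a chat completion. Content-block shape and
# purpose value are assumptions for illustration.
import litellm

uploaded = litellm.create_file(
    file=open("report.pdf", "rb"),
    purpose="user_data",
    custom_llm_provider="gemini",
)

response = litellm.completion(
    model="gemini/gemini-2.0-flash",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize this document."},
                {"type": "file", "file": {"file_id": uploaded.id}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```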
* feat(llm_http_handler.py): support async route for files api on llm_http_handler
* fix: fix linting errors
* fix: fix model info check
* fix: fix ruff errors
* fix: fix linting errors
* Revert "fix: fix linting errors"
This reverts commit 926a5a527f.
* fix: fix linting errors
* test: fix test
* test: fix tests
* fix(internal_user_endpoints.py): cleanup unused variables on beta endpoint
no team/org split on daily user endpoint
* build(model_prices_and_context_window.json): gemini-2.0-flash supports audio input
* feat(gemini/transformation.py): support passing audio input to gemini
* test: fix test
* fix(gemini/transformation.py): support audio input as a url
enables passing google cloud bucket urls
* fix(gemini/transformation.py): support explicitly passing format of file
* fix(gemini/transformation.py): expand support for inferred file types from url
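To illustrate the audio-input support added above, a hedged sketch passing an audio URL (including a Cloud Storage URL) with an explicit format, since the format cannot always be inferred from the URL; the content-block shape is an assumption.

```python
# Sketch: pass audio to gemini-2.0-flash as a url, with the format given
# explicitly. Content-block shape is an assumption for illustration.
import litellm

response = litellm.completion(
    model="gemini/gemini-2.0-flash",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Transcribe this call."},
                {
                    "type": "file",
                    "file": {
                        "file_id": "gs://my-bucket/support-call.mp3",  # cloud storage url
                        "format": "audio/mp3",  # explicit format override
                    },
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```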
* fix(sagemaker/completion/transformation.py): fix special token error when counting sagemaker tokens
* test: fix import