* fix(litellm_proxy_extras): add baselining db script
Fixes https://github.com/BerriAI/litellm/issues/9885
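For context, "baselining" with Prisma generally means marking already-applied migrations as resolved so `prisma migrate deploy` does not replay them against a live database. A minimal sketch of such a script, assuming a standard `migrations/` layout (the path below is hypothetical, not necessarily what this PR ships):

```python
# Mark every migration in the migrations/ directory as already applied,
# so `prisma migrate deploy` treats the existing database as up to date.
import subprocess
from pathlib import Path

MIGRATIONS_DIR = Path("litellm-proxy-extras/litellm_proxy_extras/migrations")  # hypothetical path

def baseline_database() -> None:
    for migration in sorted(p.name for p in MIGRATIONS_DIR.iterdir() if p.is_dir()):
        subprocess.run(
            ["prisma", "migrate", "resolve", "--applied", migration],
            check=True,
        )

if __name__ == "__main__":
    baseline_database()
```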
* fix(prisma_client.py): fix ruff errors
* ci(config.yml): add publish_proxy_extras step
* fix(config.yml): compare contents between versions to check for changes
* fix(config.yml): fix check
* fix: install toml
* fix: update check
* fix: ensure versions in sync
* fix: fix version compare
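The version-sync commits above ("install toml", "ensure versions in sync") suggest a check along these lines: parse both `pyproject.toml` files with the `toml` package and fail the build if the pinned version drifts from the package's own version. A rough sketch; the paths and the location of the pin are assumptions:

```python
# Fail CI when litellm's pin on litellm-proxy-extras diverges from the
# version the extras package actually declares.
import sys
import toml

litellm_pyproject = toml.load("pyproject.toml")
extras_pyproject = toml.load("litellm-proxy-extras/pyproject.toml")  # hypothetical path

# hypothetical location of the pin; the real dependency spec may be a table
pinned = litellm_pyproject["tool"]["poetry"]["dependencies"]["litellm-proxy-extras"]
actual = extras_pyproject["tool"]["poetry"]["version"]

if pinned.lstrip("^~=") != actual:
    print(f"version mismatch: pinned={pinned} actual={actual}")
    sys.exit(1)
```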
* fix: correct the cost for 'gemini/gemini-2.5-pro-preview-03-25' (#9896)
* fix: typo in the cost for 'gemini/gemini-2.5-pro-preview-03-25', closes #9854
* chore: update in backup file as well
* Litellm add managed files db (#9930)
* fix(openai.py): ensure openai file object shows up on logs
* fix(managed_files.py): return unified file id as b64 str
allows retrieving a file by id to work as expected
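A minimal sketch of the round-trip these commits describe, assuming the unified id is an opaque string that is base64-encoded before being handed back as the `file_id` (the names and id layout are illustrative, not litellm's actual ones):

```python
import base64
import uuid

def encode_unified_file_id(unified_id: str) -> str:
    # b64-encode so the id survives as a plain string in any file_id field
    return base64.urlsafe_b64encode(unified_id.encode("utf-8")).decode("utf-8")

def decode_unified_file_id(file_id: str) -> str:
    return base64.urlsafe_b64decode(file_id.encode("utf-8")).decode("utf-8")

unified = f"litellm_proxy;unified_id,{uuid.uuid4()}"  # hypothetical id layout
assert decode_unified_file_id(encode_unified_file_id(unified)) == unified
```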
* fix(managed_files.py): apply decoded file id transformation
* fix: add unit test for file id + decode logic
* fix: initial commit for litellm_proxy support with CRUD Endpoints
* fix(managed_files.py): support retrieve file operation
* fix(managed_files.py): support for DELETE endpoint for files
* fix(managed_files.py): retrieve file content support
supports OpenAI's retrieve file content API
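For reference, the upstream call this endpoint mirrors looks roughly like the following with the `openai` SDK (>=1.x), pointed at the proxy; the URL, key, and file id below are placeholders:

```python
from openai import OpenAI

client = OpenAI(api_key="sk-...", base_url="http://localhost:4000")  # litellm proxy
resp = client.files.content(file_id="file-abc123")  # placeholder id
print(resp.content[:100])  # raw bytes of the stored file
```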
* fix: fix linting error
* test: update tests
* fix: fix linting error
* feat(managed_files.py): support reading / writing files in DB
* feat(managed_files.py): support deleting file from DB on delete
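A sketch of the CRUD shape these two commits add, in the async prisma-client-py style the proxy uses; the table and column names here are hypothetical stand-ins for whatever the migration in this PR actually defines:

```python
async def store_managed_file(prisma_client, unified_file_id: str, file_object: str) -> None:
    # file_object is the serialized OpenAI file object for this id
    await prisma_client.db.litellm_managedfiletable.create(  # hypothetical table
        data={"unified_file_id": unified_file_id, "file_object": file_object}
    )

async def delete_managed_file(prisma_client, unified_file_id: str) -> None:
    await prisma_client.db.litellm_managedfiletable.delete(
        where={"unified_file_id": unified_file_id}
    )
```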
* test: update testing
* fix(spend_tracking_utils.py): ensure each file create request is logged correctly
* fix(managed_files.py): fix storing / returning managed file object from cache
* fix(files/main.py): pass litellm params to azure route
* test: fix test
* build: add new prisma migration
* build: bump requirements
* test: add more testing
* refactor: cleanup post merge w/ main
* fix: fix code qa errors
* [DB / Infra] Add new column team_member_permissions (#9941)
* add team_member_permissions to team table
* add migration.sql file
* fix poetry lock
* fix prisma migrations
* fix poetry lock
* fix migration
* ui new build
* fix(factory.py): correct indentation for message index increment in ollama_pt; fixes bug #9822 (#9943)
* fix(factory.py): correct indentation for message index increment in ollama_pt function
* test: add unit tests for ollama_pt function handling various message types
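An illustration of the bug class fixed here, not the literal litellm code: when merging consecutive same-role messages, the index increment has to sit at the right depth; indented one block too deep, a non-matching message never advances `msg_i` and the loop spins forever:

```python
def merge_consecutive(messages: list[dict]) -> list[str]:
    blocks, msg_i = [], 0
    while msg_i < len(messages):
        role, parts = messages[msg_i]["role"], []
        while msg_i < len(messages) and messages[msg_i]["role"] == role:
            parts.append(messages[msg_i]["content"])
            msg_i += 1  # must run on every inner iteration, not only on some branch
        blocks.append(f"{role}: " + " ".join(parts))
    return blocks

assert merge_consecutive(
    [{"role": "user", "content": "a"}, {"role": "user", "content": "b"},
     {"role": "assistant", "content": "c"}]
) == ["user: a b", "assistant: c"]
```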
* ci: update test
* fix: fix check
* ci: see what dir looks like
* ci: more checks
* ci: fix filepath
* ci: cleanup
* ci: fix ci
---------
Co-authored-by: Nilanjan De <nilanjan.de@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Dan Shaw <dan@danieljshaw.com>
* add team_member_permissions
* add GetTeamMemberPermissionsRequest types
* crud endpoint for team member permissions
* test team member permissions CRUD
* fix GetTeamMemberPermissionsRequest
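A sketch of the request/response shapes these commits imply; the field names are guesses from the commit messages, not the merged definitions:

```python
from typing import List, Optional
from pydantic import BaseModel

class GetTeamMemberPermissionsRequest(BaseModel):
    team_id: str

class GetTeamMemberPermissionsResponse(BaseModel):
    team_id: str
    # e.g. the proxy routes a non-admin team member is allowed to call
    team_member_permissions: Optional[List[str]] = None

class UpdateTeamMemberPermissionsRequest(BaseModel):
    team_id: str
    team_member_permissions: List[str]
```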
* endpoint for updating default team settings on ui
* add GET default team settings endpoint
* ui expose default team settings on UI
* update to use DefaultTeamSSOParams
* DefaultTeamSSOParams
* fix DefaultTeamSSOParams
* docs team management
* test_update_default_team_settings
* feat(managed_files.py): encode file type in unified file id
simplifies calling gemini models
* fix(common_utils.py): fix extracting file type from unified file id
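A sketch of the idea in these two commits, assuming the content type gets embedded in the id before base64-encoding so callers (e.g. the gemini routes) can recover it without a lookup; the delimiter layout is hypothetical:

```python
import base64

def build_unified_file_id(uid: str, content_type: str) -> str:
    raw = f"litellm_proxy:{content_type};unified_id,{uid}"  # hypothetical layout
    return base64.urlsafe_b64encode(raw.encode()).decode()

def extract_file_type(unified_file_id: str) -> str:
    raw = base64.urlsafe_b64decode(unified_file_id.encode()).decode()
    return raw.split(":", 1)[1].split(";", 1)[0]

fid = build_unified_file_id("1234", "application/pdf")
assert extract_file_type(fid) == "application/pdf"
```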
* fix(litellm_logging.py): create standard logging payload for create file call
* fix: fix linting error
* refactor(litellm_logging.py): refactor realtime cost tracking to use common code as rest
Ensures basic features like base model just work
* feat(realtime/): support 'base_model' cost tracking on realtime api
Fixes issue where base model was not working on realtime
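A sketch of the fallback these commits wire into realtime cost tracking: price against `base_model` when the deployment sets one, otherwise the literal model name. The function names and dict shapes are stand-ins, not litellm's API:

```python
def resolve_model_for_pricing(model: str, model_info: dict) -> str:
    # e.g. an azure realtime deployment priced as its OpenAI base model
    return model_info.get("base_model") or model

def realtime_cost(model: str, model_info: dict, usage: dict, pricing: dict) -> float:
    rates = pricing[resolve_model_for_pricing(model, model_info)]
    return (
        usage["input_tokens"] * rates["input_cost_per_token"]
        + usage["output_tokens"] * rates["output_cost_per_token"]
    )
```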
* fix: fix ruff linting error
* test: fix test
* fix(cost_calculator.py): handle custom pricing at deployment level for router
* test: add unit tests
* fix(router.py): show custom pricing on UI
checks against the correct model string
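For context, deployment-level custom pricing in litellm is configured through `litellm_params` (per the custom pricing docs); these commits make the router's cost calculator and UI honor it. The deployment details and rates below are illustrative only:

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "my-azure-gpt",
            "litellm_params": {
                "model": "azure/my-deployment",          # hypothetical deployment
                "api_base": "https://example.openai.azure.com",
                "api_key": "sk-...",
                "input_cost_per_token": 0.0000004,       # custom, per-deployment
                "output_cost_per_token": 0.0000016,
            },
        }
    ]
)
```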
* fix: fix linting error
* docs(custom_pricing.md): clarify custom pricing for proxy
Fixes https://github.com/BerriAI/litellm/issues/8573#issuecomment-2790420740
* test: update code qa test
* fix: cleanup traceback
* fix: handle litellm param custom pricing
* test: update test
* fix(cost_calculator.py): add router model id to list of potential model names
* fix(cost_calculator.py): fix router model id check
* fix: router.py - maintain older model registry approach
* fix: fix ruff check
* fix(router.py): router get deployment info
add custom values to mapped dict
* test: update test
* fix(utils.py): update only if value is non-null
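A sketch of the guard this commit describes: merge override values into an existing mapping while skipping keys whose value is None, so explicit settings are never clobbered by unset ones (the function name is illustrative):

```python
def update_non_null(base: dict, overrides: dict) -> dict:
    base.update({k: v for k, v in overrides.items() if v is not None})
    return base

cfg = update_non_null({"timeout": 30, "retries": 2}, {"timeout": None, "retries": 5})
assert cfg == {"timeout": 30, "retries": 5}
```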
* test: add unit test