* feat(sidebars): add new item for agentops integration in Logging & Observability category
* Update agentops_integration.md to enhance title formatting and remove redundant section
* Enhance AgentOps integration in documentation and codebase by removing LiteLLMCallbackHandler references, adding environment variable configurations, and updating logging initialization for AgentOps support.
* Update AgentOps integration documentation to include instructions for obtaining API keys and clarify environment variable setup.
* Add unit tests for AgentOps integration and improve error handling in token fetching
* Add unit tests for AgentOps configuration and token fetching functionality
* Corrected agentops test directory
* Linting fix
* chore: add OpenTelemetry dependencies to pyproject.toml
* chore: update OpenTelemetry dependencies and add new packages in pyproject.toml and poetry.lock
* build(model_prices_and_context_window.json): add vertex ai gemini-2.5-flash pricing
* build(model_prices_and_context_window.json): add gemini reasoning token pricing
* fix(vertex_and_google_ai_studio_gemini.py): support counting thinking tokens for gemini
allows accurate cost calc
* fix(utils.py): add reasoning token cost calc to generic cost calc
ensures gemini-2.5-flash cost calculation is accurate
* build(model_prices_and_context_window.json): mark gemini-2.5-flash as 'supports_reasoning'
* feat(gemini/): support 'thinking' + 'reasoning_effort' params + new unit tests
allow controlling thinking effort for gemini-2.5-flash models
* test: update unit testing
* feat(vertex_and_google_ai_studio_gemini.py): return reasoning content if given in gemini response
* test: update model name
* fix: fix ruff check
* test(test_spend_management_endpoints.py): update tests to be less sensitive to new keys / updates to usage object
* fix(vertex_and_google_ai_studio_gemini.py): fix translation
* fix(model_info_view.tsx): cleanup text
* fix(key_management_endpoints.py): fix filtering litellm-dashboard keys for internal users
* fix(proxy_track_cost_callback.py): prevent flooding spend logs with admin endpoint errors
* test: add unit testing for logic
* test(test_auth_exception_handler.py): add more unit testing
* fix(router.py): correctly handle retrieving model info on get_model_group_info
fixes issue where model hub was showing None prices
* fix: fix linting errors
* fix(factory.py): correct indentation for message index increment in ollama_pt function
* test: add unit tests for ollama_pt function handling various message types
* fix(openai.py): ensure openai file object shows up on logs
* fix(managed_files.py): return unified file id as b64 str
allows retrieve file id to work as expected
* fix(managed_files.py): apply decoded file id transformation
* fix: add unit test for file id + decode logic
* fix: initial commit for litellm_proxy support with CRUD Endpoints
* fix(managed_files.py): support retrieve file operation
* fix(managed_files.py): support for DELETE endpoint for files
* fix(managed_files.py): retrieve file content support
supports retrieve file content api from openai
* fix: fix linting error
* test: update tests
* fix: fix linting error
* feat(managed_files.py): support reading / writing files in DB
* feat(managed_files.py): support deleting file from DB on delete
* test: update testing
* fix(spend_tracking_utils.py): ensure each file create request is logged correctly
* fix(managed_files.py): fix storing / returning managed file object from cache
* fix(files/main.py): pass litellm params to azure route
* test: fix test
* build: add new prisma migration
* build: bump requirements
* test: add more testing
* refactor: cleanup post merge w/ main
* fix: fix code qa errors
* fix(openai.py): ensure openai file object shows up on logs
* fix(managed_files.py): return unified file id as b64 str
allows retrieve file id to work as expected
* fix(managed_files.py): apply decoded file id transformation
* fix: add unit test for file id + decode logic
* fix: initial commit for litellm_proxy support with CRUD Endpoints
* fix(managed_files.py): support retrieve file operation
* fix(managed_files.py): support for DELETE endpoint for files
* fix(managed_files.py): retrieve file content support
supports retrieve file content api from openai
* fix: fix linting error
* test: update tests
* fix: fix linting error
* fix(files/main.py): pass litellm params to azure route
* test: fix test
* feat(managed_files.py): encode file type in unified file id
simplify calling gemini models
* fix(common_utils.py): fix extracting file type from unified file id
* fix(litellm_logging.py): create standard logging payload for create file call
* fix: fix linting error
* refactor(litellm_logging.py): refactor realtime cost tracking to use common code as rest
Ensures basic features like base model just work
* feat(realtime/): support 'base_model' cost tracking on realtime api
Fixes issue where base model was not working on realtime
* fix: fix ruff linting error
* test: fix test
* fix(cost_calculator.py): handle custom pricing at deployment level for router
* test: add unit tests
* fix(router.py): show custom pricing on UI
check correct model str
* fix: fix linting error
* docs(custom_pricing.md): clarify custom pricing for proxy
Fixes https://github.com/BerriAI/litellm/issues/8573#issuecomment-2790420740
* test: update code qa test
* fix: cleanup traceback
* fix: handle litellm param custom pricing
* test: update test
* fix(cost_calculator.py): add router model id to list of potential model names
* fix(cost_calculator.py): fix router model id check
* fix: router.py - maintain older model registry approach
* fix: fix ruff check
* fix(router.py): router get deployment info
add custom values to mapped dict
* test: update test
* fix(utils.py): update only if value is non-null
* test: add unit test
* fix(litellm_proxy/chat/transformation.py): support 'thinking' param
Fixes https://github.com/BerriAI/litellm/issues/9380
* feat(azure/gpt_transformation.py): add azure audio model support
Closes https://github.com/BerriAI/litellm/issues/6305
* fix(utils.py): use provider_config in common functions
* fix(utils.py): add missing provider configs to get_chat_provider_config
* test: fix test
* fix: fix path
* feat(utils.py): make bedrock invoke nova config baseconfig compatible
* fix: fix linting errors
* fix(azure_ai/transformation.py): remove buggy optional param filtering for azure ai
Removes incorrect check for support tool choice when calling azure ai - prevented calling models with response_format unless on litell model cost map
* fix(amazon_cohere_transformation.py): fix bedrock invoke cohere transformation to inherit from coherechatconfig
* test: fix azure ai tool choice mapping
* fix: fix model cost map to add 'supports_tool_choice' to cohere models
* fix(get_supported_openai_params.py): check if custom llm provider in llm providers
* fix(get_supported_openai_params.py): fix llm provider in list check
* fix: fix ruff check errors
* fix: support defs when calling bedrock nova
* fix(factory.py): fix test
* fix(router.py): support reusable credentials via passthrough router
enables reusable vertex credentials to be used in passthrough
* test: fix test
* test(test_router_adding_deployments.py): add unit testing
* Add date picker to usage tab + Add reasoning_content token tracking across all providers on streaming (#9722)
* feat(new_usage.tsx): add date picker for new usage tab
allow user to look back on their usage data
* feat(anthropic/chat/transformation.py): report reasoning tokens in completion token details
allows usage tracking on how many reasoning tokens are actually being used
* feat(streaming_chunk_builder.py): return reasoning_tokens in anthropic/openai streaming response
allows tracking reasoning_token usage across providers
* Fix update team metadata + fix bulk adding models on Ui (#9721)
* fix(handle_add_model_submit.tsx): fix bulk adding models
* fix(team_info.tsx): fix team metadata update
Fixes https://github.com/BerriAI/litellm/issues/9689
* (v0) Unified file id - allow calling multiple providers with same file id (#9718)
* feat(files_endpoints.py): initial commit adding 'target_model_names' support
allow developer to specify all the models they want to call with the file
* feat(files_endpoints.py): return unified files endpoint
* test(test_files_endpoints.py): add validation test - if invalid purpose submitted
* feat: more updates
* feat: initial working commit of unified file id translation
* fix: additional fixes
* fix(router.py): remove model replace logic in jsonl on acreate_file
enables file upload to work for chat completion requests as well
* fix(files_endpoints.py): remove whitespace around model name
* fix(azure/handler.py): return acreate_file with correct response type
* fix: fix linting errors
* test: fix mock test to run on github actions
* fix: fix ruff errors
* fix: fix file too large error
* fix(utils.py): remove redundant var
* test: modify test to work on github actions
* test: update tests
* test: more debug logs to understand ci/cd issue
* test: fix test for respx
* test: skip mock respx test
fails on ci/cd - not clear why
* fix: fix ruff check
* fix: fix test
* fix(model_connection_test.tsx): fix linting error
* test: update unit tests
* build(pyproject.toml): add new dev dependencies - for type checking
* build: reformat files to fit black
* ci: reformat to fit black
* ci(test-litellm.yml): make tests run clear
* build(pyproject.toml): add ruff
* fix: fix ruff checks
* build(mypy/): fix mypy linting errors
* fix(hashicorp_secret_manager.py): fix passing cert for tls auth
* build(mypy/): resolve all mypy errors
* test: update test
* fix: fix black formatting
* build(pre-commit-config.yaml): use poetry run black
* fix(proxy_server.py): fix linting error
* fix: fix ruff safe representation error
* refactor: introduce new transformation config for gpt-4o-transcribe models
* refactor: expose new transformation configs for audio transcription
* ci: fix config yml
* feat(openai/transcriptions): support provider config transformation on openai audio transcriptions
allows gpt-4o and whisper audio transformation to work as expected
* refactor: migrate fireworks ai + deepgram to new transform request pattern
* feat(openai/): working support for gpt-4o-audio-transcribe
* build(model_prices_and_context_window.json): add gpt-4o-transcribe to model cost map
* build(model_prices_and_context_window.json): specify what endpoints are supported for `/audio/transcriptions`
* fix(get_supported_openai_params.py): fix return
* refactor(deepgram/): migrate unit test to deepgram handler
* refactor: cleanup unused imports
* fix(get_supported_openai_params.py): fix linting error
* test: update test
* fix(vertex_and_google_ai_studio_gemini.py): log gemini audio tokens in usage object
enables accurate cost tracking
* refactor(vertex_ai/cost_calculator.py): refactor 128k+ token cost calculation to only run if model info has it
Google has moved away from this for gemini-2.0 models
* refactor(vertex_ai/cost_calculator.py): migrate to usage object for more flexible data passthrough
* fix(llm_cost_calc/utils.py): support audio token cost tracking in generic cost per token
enables vertex ai cost tracking to work with audio tokens
* fix(llm_cost_calc/utils.py): default to total prompt tokens if text tokens field not set
* refactor(llm_cost_calc/utils.py): move openai cost tracking to generic cost per token
more consistent behaviour across providers
* test: add unit test for gemini audio token cost calculation
* ci: bump ci config
* test: fix test