* refactor(vertex_llm_base.py): Prevent credential misrouting for projects
Fixes https://github.com/BerriAI/litellm/issues/7904
* fix: passing unit tests
* fix(vertex_llm_base.py): common auth logic across sync + async vertex ai calls
prevents a credential caching issue across both flows
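A minimal sketch of the scenario this guards against: two Vertex AI calls carrying different service-account credentials must each authenticate against their own project rather than reusing a cached credential. The project ids and key-file paths below are hypothetical placeholders.

```python
import litellm

# each call passes its own project + service-account key; the shared auth
# logic must not route project-b's request with project-a's cached credential
resp_a = litellm.completion(
    model="vertex_ai/gemini-1.5-pro",
    messages=[{"role": "user", "content": "hi"}],
    vertex_project="project-a",                       # hypothetical project id
    vertex_credentials="/path/to/project-a-sa.json",  # hypothetical key file
)
resp_b = litellm.completion(
    model="vertex_ai/gemini-1.5-pro",
    messages=[{"role": "user", "content": "hi"}],
    vertex_project="project-b",
    vertex_credentials="/path/to/project-b-sa.json",
)
```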
* test: fix test
* fix(vertex_llm_base.py): handle project id in default case
* fix(factory.py): don't pass cache control if not set
bedrock invoke does not support this
* test: fix test
* fix(vertex_llm_base.py): add .exception message in load_auth
* fix: fix ruff error
* Add date picker to usage tab + Add reasoning_content token tracking across all providers on streaming (#9722)
* feat(new_usage.tsx): add date picker for new usage tab
allows users to look back on their usage data
* feat(anthropic/chat/transformation.py): report reasoning tokens in completion token details
allows tracking how many reasoning tokens are actually being used
* feat(streaming_chunk_builder.py): return reasoning_tokens in anthropic/openai streaming response
allows tracking reasoning_token usage across providers
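A short sketch of reading the new reasoning-token accounting; the model name is illustrative, and `completion_tokens_details` follows the OpenAI-compatible usage shape litellm returns.

```python
import litellm

resp = litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",  # illustrative model
    messages=[{"role": "user", "content": "Think it through: what is 17 * 23?"}],
    thinking={"type": "enabled", "budget_tokens": 1024},
)

# reasoning tokens are now reported separately from visible completion tokens
print(resp.usage.completion_tokens_details.reasoning_tokens)
```

With the streaming chunk builder change, the same field is populated when the final response is assembled from a stream.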
* Fix update team metadata + fix bulk adding models on Ui (#9721)
* fix(handle_add_model_submit.tsx): fix bulk adding models
* fix(team_info.tsx): fix team metadata update
Fixes https://github.com/BerriAI/litellm/issues/9689
* (v0) Unified file id - allow calling multiple providers with same file id (#9718)
* feat(files_endpoints.py): initial commit adding 'target_model_names' support
allows developers to specify all the models they want to call with the file
* feat(files_endpoints.py): return unified files endpoint
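A hedged sketch of `target_model_names` against a running proxy: the base url, key, and model names are placeholders, and passing the field via `extra_body` is an assumption based on the OpenAI client's escape hatch for extra parameters.

```python
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:4000", api_key="sk-1234")  # placeholder proxy + key

file = client.files.create(
    file=open("data.jsonl", "rb"),
    purpose="user_data",
    # comma-separated list of models the returned file id should work with
    extra_body={"target_model_names": "gpt-4o, gemini-2.0-flash"},
)
print(file.id)  # one unified id, usable across the listed providers
```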
* test(test_files_endpoints.py): add validation test - if invalid purpose submitted
* feat: more updates
* feat: initial working commit of unified file id translation
* fix: additional fixes
* fix(router.py): remove model replace logic in jsonl on acreate_file
enables file upload to work for chat completion requests as well
* fix(files_endpoints.py): remove whitespace around model name
* fix(azure/handler.py): return acreate_file with correct response type
* fix: fix linting errors
* test: fix mock test to run on github actions
* fix: fix ruff errors
* fix: fix file too large error
* fix(utils.py): remove redundant var
* test: modify test to work on github actions
* test: update tests
* test: more debug logs to understand ci/cd issue
* test: fix test for respx
* test: skip mock respx test
fails on ci/cd - not clear why
* fix: fix ruff check
* fix: fix test
* fix(model_connection_test.tsx): fix linting error
* test: update unit tests
* test: fix import for test
* fix: fix bad error string
* docs: cleanup files docs
* fix(files/main.py): cleanup error string
* style: initial commit with a provider/config pattern for files api
google ai studio files api onboarding
* fix: test
* feat(gemini/files/transformation.py): support gemini files api response transformation
* fix(gemini/files/transformation.py): return file id as gemini uri
allows the id to be passed into a chat completion request, just like openai
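A hedged sketch of the google ai studio flow these commits enable: upload through the unified files api, then reference the returned id (the gemini uri) in a chat request. The `custom_llm_provider` argument and the `file` content-part shape are assumptions modeled on litellm's OpenAI-compatible interfaces.

```python
import litellm

f = litellm.create_file(
    file=open("report.pdf", "rb"),
    purpose="user_data",
    custom_llm_provider="gemini",  # route the upload to google ai studio
)

resp = litellm.completion(
    model="gemini/gemini-2.0-flash",
    messages=[{
        "role": "user",
        "content": [
            {"type": "file", "file": {"file_id": f.id}},  # id is the gemini uri
            {"type": "text", "text": "Summarize this document."},
        ],
    }],
)
```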
* feat(llm_http_handler.py): support async route for files api on llm_http_handler
* fix: fix linting errors
* fix: fix model info check
* fix: fix ruff errors
* fix: fix linting errors
* Revert "fix: fix linting errors"
This reverts commit 926a5a527f.
* fix: fix linting errors
* test: fix test
* test: fix tests
* fix(internal_user_endpoints.py): cleanup unused variables on beta endpoint
no team/org split on daily user endpoint
* build(model_prices_and_context_window.json): gemini-2.0-flash supports audio input
* feat(gemini/transformation.py): support passing audio input to gemini
* test: fix test
* fix(gemini/transformation.py): support audio input as a url
enables passing google cloud bucket urls
* fix(gemini/transformation.py): support explicitly passing format of file
* fix(gemini/transformation.py): expand support for inferred file types from url
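A hedged sketch of the new audio-input path: a google cloud storage url plus an explicit format hint, so litellm does not have to infer the file type from the url. The content-part shape and the `format` key are assumptions based on the commits above; the bucket url is a placeholder.

```python
import litellm

resp = litellm.completion(
    model="gemini/gemini-2.0-flash",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Transcribe this recording."},
            {
                "type": "file",
                "file": {
                    "file_id": "gs://my-bucket/recording.wav",  # placeholder bucket url
                    "format": "audio/wav",  # explicit format, skips type inference
                },
            },
        ],
    }],
)
```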
* fix(sagemaker/completion/transformation.py): fix special token error when counting sagemaker tokens
* test: fix import
* build(pyproject.toml): add new dev dependencies - for type checking
* build: reformat files to fit black
* ci: reformat to fit black
* ci(test-litellm.yml): make test runs clearer
* build(pyproject.toml): add ruff
* fix: fix ruff checks
* build(mypy/): fix mypy linting errors
* fix(hashicorp_secret_manager.py): fix passing cert for tls auth
* build(mypy/): resolve all mypy errors
* test: update test
* fix: fix black formatting
* build(pre-commit-config.yaml): use poetry run black
* fix(proxy_server.py): fix linting error
* fix: fix ruff safe representation error
* fix(anthropic/chat/transformation.py): Don't set tool choice on response_format conversion when thinking is enabled
Not allowed by Anthropic
Fixes https://github.com/BerriAI/litellm/issues/8901
* refactor: move test to base anthropic chat tests
ensures consistent behaviour across vertex/anthropic/bedrock
* fix(anthropic/chat/transformation.py): if a thinking budget is specified and max tokens is not, ensure the max tokens sent to anthropic exceed the thinking budget
* feat(converse_transformation.py): correctly handle thinking + response format on Bedrock Converse
Fixes https://github.com/BerriAI/litellm/issues/8901
* fix(converse_transformation.py): correctly handle adding max tokens
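A minimal sketch of the interaction these commits fix: with thinking enabled, the response_format conversion must not set tool_choice (anthropic rejects it), and max tokens must stay above the thinking budget. The model name and budget are illustrative; the same behaviour is expected on vertex and bedrock converse.

```python
import litellm

resp = litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",  # illustrative model
    messages=[{"role": "user", "content": "Return a JSON object with an 'answer' key."}],
    response_format={"type": "json_object"},       # converted without tool_choice
    thinking={"type": "enabled", "budget_tokens": 1024},
    # max_tokens deliberately omitted: litellm now raises it above budget_tokens
)
print(resp.choices[0].message.content)
```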
* test: handle service unavailable error
* fix(proxy_server.py): get master key from the environment if it is not set in general settings, or if general settings are not set at all
* test: mark flaky test
* test(test_proxy_server.py): mock prisma client
* ci: add new github workflow for testing just the mock tests
* fix: fix linting error
* ci(conftest.py): add conftest.py to isolate proxy tests
* build(pyproject.toml): add respx to dev dependencies
* build(pyproject.toml): add prisma to dev dependencies
* test: fix mock prompt management tests to use a mock anthropic key
* ci(test-litellm.yml): parallelize mock testing
make it run faster
* build(pyproject.toml): add hypercorn as dev dep
* build(pyproject.toml): separate proxy vs. core dev dependencies
make it easier for non-proxy contributors to run tests locally - e.g. no need to install hypercorn
* ci(test-litellm.yml): pin python version
* test(test_rerank.py): move test - cannot be mocked, requires aws credentials for e2e testing
* ci: add thank you message to ci
* test: add mock env var to test
* test: add autouse to tests
* test: test mock env vars for e2e tests
* fix: initial commit for adding provider model discovery to gemini
* feat(gemini/): add model discovery for gemini/ route
* docs(set_keys.md): update docs to show you can check available gemini models as well
* feat(anthropic/): add model discovery for anthropic api key
* feat(xai/): add model discovery for XAI
enables checking what models an xai key can call
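Per the updated set_keys.md docs, a short sketch of provider model discovery: with a provider key in the environment, `get_valid_models` can query the provider's models endpoint instead of relying on the static model list.

```python
import os
import litellm

os.environ["GEMINI_API_KEY"] = "..."  # placeholder key; anthropic/xai keys work the same way

# check_provider_endpoint=True asks the provider which models this key can call
models = litellm.get_valid_models(check_provider_endpoint=True)
print(models)  # e.g. ["gemini/gemini-2.0-flash", ...]
```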
* ci: bump ci config yml
* fix(topaz/common_utils.py): fix linting error
* fix: fix linting error for python38
* refactor: introduce new transformation config for gpt-4o-transcribe models
* refactor: expose new transformation configs for audio transcription
* ci: fix config yml
* feat(openai/transcriptions): support provider config transformation on openai audio transcriptions
allows gpt-4o and whisper audio transcription to work as expected
* refactor: migrate fireworks ai + deepgram to new transform request pattern
* feat(openai/): working support for gpt-4o-audio-transcribe
* build(model_prices_and_context_window.json): add gpt-4o-transcribe to model cost map
* build(model_prices_and_context_window.json): specify what endpoints are supported for `/audio/transcriptions`
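A minimal sketch of the transcription route these configs wire up; the file path is a placeholder and the model name comes from the cost-map entry above.

```python
import litellm

with open("speech.wav", "rb") as audio_file:  # placeholder audio file
    transcript = litellm.transcription(
        model="openai/gpt-4o-transcribe",
        file=audio_file,
    )
print(transcript.text)
```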
* fix(get_supported_openai_params.py): fix return
* refactor(deepgram/): migrate unit test to deepgram handler
* refactor: cleanup unused imports
* fix(get_supported_openai_params.py): fix linting error
* test: update test
* fix(vertex_and_google_ai_studio_gemini.py): log gemini audio tokens in usage object
enables accurate cost tracking
* refactor(vertex_ai/cost_calculator.py): refactor 128k+ token cost calculation to only run if model info has it
Google has moved away from this for gemini-2.0 models
* refactor(vertex_ai/cost_calculator.py): migrate to usage object for more flexible data passthrough
* fix(llm_cost_calc/utils.py): support audio token cost tracking in generic cost per token
enables vertex ai cost tracking to work with audio tokens
* fix(llm_cost_calc/utils.py): default to total prompt tokens if text tokens field not set
* refactor(llm_cost_calc/utils.py): move openai cost tracking to generic cost per token
more consistent behaviour across providers
* test: add unit test for gemini audio token cost calculation
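A hedged sketch of the resulting cost accounting: gemini audio tokens land in the usage object's prompt token details, and `completion_cost` prices them through the generic cost-per-token path. Field names follow the OpenAI-compatible usage shape litellm returns.

```python
import litellm

resp = litellm.completion(
    model="gemini/gemini-2.0-flash",
    messages=[{"role": "user", "content": "hi"}],  # with audio input (see above), audio_tokens is populated
)

print(resp.usage.prompt_tokens_details.audio_tokens)      # audio tokens tracked separately
print(litellm.completion_cost(completion_response=resp))  # priced via generic cost per token
```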
* ci: bump ci config
* test: fix test