* Add date picker to usage tab + Add reasoning_content token tracking across all providers on streaming (#9722)
* feat(new_usage.tsx): add date picker for new usage tab
allows users to look back at their usage data
* feat(anthropic/chat/transformation.py): report reasoning tokens in completion token details
allows tracking how many reasoning tokens are actually being used
* feat(streaming_chunk_builder.py): return reasoning_tokens in anthropic/openai streaming response
allows tracking reasoning_token usage across providers
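A minimal sketch of reading the newly tracked reasoning tokens from a rebuilt streaming response, assuming a Claude 3.7 deployment with thinking enabled (the model id and budget are illustrative):

```python
import litellm

messages = [{"role": "user", "content": "Explain quicksort in one paragraph."}]

# Collect the streamed chunks as usual.
chunks = []
for chunk in litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",  # illustrative model id
    messages=messages,
    thinking={"type": "enabled", "budget_tokens": 1024},
    stream=True,
):
    chunks.append(chunk)

# stream_chunk_builder reassembles the chunks into a full ModelResponse;
# per this change its usage now carries reasoning token counts.
response = litellm.stream_chunk_builder(chunks, messages=messages)
print(response.usage.completion_tokens_details.reasoning_tokens)
```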
* Fix update team metadata + fix bulk adding models on Ui (#9721)
* fix(handle_add_model_submit.tsx): fix bulk adding models
* fix(team_info.tsx): fix team metadata update
Fixes https://github.com/BerriAI/litellm/issues/9689
* (v0) Unified file id - allow calling multiple providers with same file id (#9718)
* feat(files_endpoints.py): initial commit adding 'target_model_names' support
allows developers to specify all the models they want to call with the file
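A hypothetical sketch of the intended flow, using the OpenAI SDK against a LiteLLM proxy; the `target_model_names` field name comes from this commit, but the proxy URL, key, and accepted value format are assumptions:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-1234")

created = client.files.create(
    file=open("batch_input.jsonl", "rb"),
    purpose="batch",
    # models the developer wants to call with this file id (assumed format)
    extra_body={"target_model_names": "gpt-4o-mini, gemini-2.0-flash"},
)
print(created.id)  # unified file id, reusable across the listed models
```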
* feat(files_endpoints.py): return unified files endpoint
* test(test_files_endpoints.py): add validation test - if invalid purpose submitted
* feat: more updates
* feat: initial working commit of unified file id translation
* fix: additional fixes
* fix(router.py): remove model replace logic in jsonl on acreate_file
enables file upload to work for chat completion requests as well
* fix(files_endpoints.py): remove whitespace around model name
* fix(azure/handler.py): return acreate_file with correct response type
* fix: fix linting errors
* test: fix mock test to run on github actions
* fix: fix ruff errors
* fix: fix file too large error
* fix(utils.py): remove redundant var
* test: modify test to work on github actions
* test: update tests
* test: more debug logs to understand ci/cd issue
* test: fix test for respx
* test: skip mock respx test
fails on ci/cd - not clear why
* fix: fix ruff check
* fix: fix test
* fix(model_connection_test.tsx): fix linting error
* test: update unit tests
* fix(anthropic/chat/transformation.py): Don't set tool choice on response_format conversion when thinking is enabled
Not allowed by Anthropic
Fixes https://github.com/BerriAI/litellm/issues/8901
* refactor: move test to base anthropic chat tests
ensures consistent behaviour across vertex/anthropic/bedrock
* fix(anthropic/chat/transformation.py): if a thinking budget is specified and max_tokens is not - ensure the max_tokens sent to Anthropic is higher than the thinking budget
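A sketch of the behavior these two fixes target, assuming an illustrative Claude 3.7 model id: with thinking enabled, the response_format conversion should not force tool_choice (Anthropic rejects it), and when only a thinking budget is given, litellm should send a max_tokens value above that budget.

```python
import litellm

response = litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",  # illustrative model id
    messages=[{"role": "user", "content": "Return a JSON object with a 'summary' key."}],
    response_format={"type": "json_object"},
    thinking={"type": "enabled", "budget_tokens": 2048},
    # max_tokens intentionally omitted - litellm is expected to send Anthropic
    # a value larger than the 2048-token thinking budget
)
print(response.choices[0].message.content)
```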
* feat(converse_transformation.py): correctly handle thinking + response format on Bedrock Converse
Fixes https://github.com/BerriAI/litellm/issues/8901
* fix(converse_transformation.py): correctly handle adding max tokens
* test: handle service unavailable error
* fix(invoke_handler.py): fix converse streaming - return signature + ensure consistency with anthropic api response
* build(model_prices_and_context_window.json): fix anthropic api claude-3-7 max output tokens
with the beta header this is 128k
Resolves https://github.com/BerriAI/litellm/issues/8964
* feat(handler.py): handle new anthropic 'thinking_delta' block on streaming
Fixes https://github.com/BerriAI/litellm/issues/8825
* Fix missing signature_delta in thinking blocks when streaming from Claude 3.7 (#8797)
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
* test: update test to enforce signature found
* refactor(signature param): use 'signature' instead of 'signature_delta'
keeps it in sync with the Anthropic API
* fix: fix linting error
---------
Co-authored-by: Martin Krasser <krasserm@googlemail.com>
* feat(bedrock/converse/transformation.py): support claude-3-7-sonnet reasoning_content transformation
Closes https://github.com/BerriAI/litellm/issues/8777
* fix(bedrock/): support returning `reasoning_content` on streaming for claude-3-7
Resolves https://github.com/BerriAI/litellm/issues/8777
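A minimal sketch of reading `reasoning_content` from streamed deltas when calling Claude 3.7 via Bedrock; the model id is an illustrative assumption:

```python
import litellm

stream = litellm.completion(
    model="bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0",  # illustrative
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    thinking={"type": "enabled", "budget_tokens": 1024},
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta
    # reasoning deltas arrive on `reasoning_content`, consistent with the Anthropic API
    if getattr(delta, "reasoning_content", None):
        print("reasoning:", delta.reasoning_content)
    if delta.content:
        print("answer:", delta.content)
```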
* feat(bedrock/): unify converse reasoning content blocks for consistency across anthropic and bedrock
* fix(anthropic/chat/transformation.py): handle deepseek-style 'reasoning_content' extraction within transformation.py
simpler logic
* feat(bedrock/): fix streaming to return blocks in consistent format
* fix: fix linting error
* test: fix test
* feat(factory.py): fix bedrock thinking block translation on tool calling
allows passing the thinking blocks back to bedrock for tool calling
* fix(types/utils.py): don't exclude provider_specific_fields on model dump
ensures consistent responses
* fix: fix linting errors
* fix(convert_dict_to_response.py): pass reasoning_content on root
* fix: test
* fix(streaming_handler.py): add helper util for setting model id
* fix(streaming_handler.py): fix setting model id on model response stream chunk
* fix(streaming_handler.py): fix linting error
* fix(streaming_handler.py): fix linting error
* fix(types/utils.py): add provider_specific_fields to model stream response
* fix(streaming_handler.py): copy provider specific fields and add them to the root of the streaming response
* fix(streaming_handler.py): fix check
* fix: fix test
* fix(types/utils.py): ensure messages content is always openai compatible
* fix(types/utils.py): fix delta object to always be openai compatible
only introduce new params if variable exists
* test: fix bedrock nova tests
* test: skip flaky test
* test: skip flaky test in ci/cd
* fix(o_series_transformation.py): fix optional param check for o-series models
o3-mini and o1 do not support parallel tool calling
* fix(utils.py): support 'drop_params' for 'thinking' param across models
allows switching to older Claude versions (or non-Anthropic models) with the param safely dropped
* fix: fix passing thinking param in optional params
allows dropping thinking_param where not applicable
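A sketch of the `drop_params` behavior: the same request can target an older Claude or a non-Anthropic model, and the unsupported `thinking` param is dropped instead of raising. Model names are illustrative:

```python
import litellm

for model in ["anthropic/claude-3-7-sonnet-20250219", "gpt-4o-mini"]:
    response = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": "hi"}],
        thinking={"type": "enabled", "budget_tokens": 1024},
        drop_params=True,  # drop `thinking` where the provider doesn't support it
    )
    print(model, "->", response.choices[0].message.content[:40])
```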
* test: update old model
* fix(utils.py): fix linting errors
* fix(main.py): add param to acompletion
* build(ui/): update ui
* fix: drop unsupported non-whitespace characters for real when calling… (#7484)
* fix: drop unsupported non-whitespace characters for real when calling anthropic with stop sequences
* test: add parameterized test for _map_stop_sequences method in AnthropicConfig
---------
Co-authored-by: Wolfram Ravenwolf <52386626+WolframRavenwolf@users.noreply.github.com>
* fix(azure/): support passing headers to azure openai endpoints
Fixes https://github.com/BerriAI/litellm/issues/6217
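A minimal sketch of forwarding custom headers to an Azure OpenAI deployment; the deployment name, api base/version, and header names are assumptions:

```python
import litellm

response = litellm.completion(
    model="azure/my-gpt-4o-deployment",              # illustrative deployment
    api_base="https://my-resource.openai.azure.com",
    api_version="2024-06-01",
    messages=[{"role": "user", "content": "hello"}],
    extra_headers={"x-correlation-id": "abc-123"},   # now passed through to Azure
)
```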
* fix(utils.py): move default tokenizer to just openai
the HF tokenizer makes network calls when fetching the tokenizer - this slows down execution
* fix(router.py): fix pattern matching router - add generic "*" to it as well
Fixes issue where generic "*" model access group wouldn't show up
* fix(pattern_match_deployments.py): match to the more specific pattern
allows setting a generic wildcard model access group and excluding specific models more easily
* fix(proxy_server.py): fix _delete_deployment to handle base case where db_model list is empty
don't delete all router models because of an empty list
Fixes https://github.com/BerriAI/litellm/issues/7196
* fix(anthropic/): fix handling response_format for anthropic messages with anthropic api
* fix(fireworks_ai/): support passing response_format + tool call in same message
Addresses https://github.com/BerriAI/litellm/issues/7135
* Revert "fix(fireworks_ai/): support passing response_format + tool call in same message"
This reverts commit 6a30dc6929.
* test: fix test
* fix(replicate/): fix replicate default retry/polling logic
* test: add unit testing for router pattern matching
* test: update test to use default oai tokenizer
* test: mark flaky test
* test: skip flaky test
* fix(together_ai/chat): only return response_format + tools for supported models
Fixes https://github.com/BerriAI/litellm/issues/6972
* feat(bedrock/rerank): initial working commit for bedrock rerank api support
Closes https://github.com/BerriAI/litellm/issues/7021
* feat(bedrock/rerank): async bedrock rerank api support
Addresses https://github.com/BerriAI/litellm/issues/7021
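A sketch of the Bedrock rerank support via litellm's rerank API; the Bedrock rerank model id is an assumption, substitute the one enabled in your AWS account and region:

```python
import litellm

result = litellm.rerank(
    model="bedrock/amazon.rerank-v1:0",  # assumed model id
    query="What is the capital of France?",
    documents=[
        "Paris is the capital of France.",
        "Berlin is the capital of Germany.",
    ],
    top_n=1,
)
print(result.results)  # ranked documents with relevance scores
```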
* build(model_prices_and_context_window.json): add 'supports_prompt_caching' for bedrock models + clean up cross-region entries from the model list (duplicate information - led to inconsistencies)
* docs(json_mode.md): clarify model support for json schema
Closes https://github.com/BerriAI/litellm/issues/6998
* fix(_service_logger.py): handle dd callback in list
ensure failed spend tracking is logged to datadog
* feat(converse_transformation.py): translate from anthropic format to bedrock format
Closes https://github.com/BerriAI/litellm/issues/7030
* fix: fix linting errors
* test: fix test
* fix(key_management_endpoints.py): override metadata field value on update
allow user to override tags
* feat(__init__.py): expose new disable_end_user_cost_tracking_prometheus_only metric
allow disabling end user cost tracking on prometheus - fixes cardinality issue
* fix(litellm_pre_call_utils.py): add key/team level enforced params
Fixes https://github.com/BerriAI/litellm/issues/6652
* fix(key_management_endpoints.py): allow user to pass in `enforced_params` as a top level param on /key/generate and /key/update
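A hypothetical sketch of the key-level flow: generate a key that requires every request to include certain params. The proxy URL, master key, and exact accepted value format are assumptions:

```python
import requests

resp = requests.post(
    "http://localhost:4000/key/generate",
    headers={"Authorization": "Bearer sk-master-key"},  # assumed master key
    json={"enforced_params": ["user", "metadata.generation_name"]},
)
print(resp.json()["key"])  # requests with this key must include the enforced params
```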
* docs(enterprise.md): add docs on enforcing required params for llm requests
* Add support of Galadriel API (#7005)
* fix(router.py): robust retry after handling
set the retry-after time to 0 if there are >0 healthy deployments. handle the base case of 1 deployment
* test(test_router.py): fix test
* feat(bedrock/): add support for 'nova' models
also adds explicit 'converse/' route for simpler routing
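A sketch of calling a Nova model through the new explicit `converse/` route; the model id is illustrative:

```python
import litellm

response = litellm.completion(
    model="bedrock/converse/us.amazon.nova-pro-v1:0",  # illustrative model id
    messages=[{"role": "user", "content": "Summarize the Iliad in two lines."}],
)
print(response.choices[0].message.content)
```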
* fix: fix 'supports_pdf_input'
return whether the model supports pdf input in get_model_info
* feat(converse_transformation.py): support bedrock pdf input
* docs(document_understanding.md): add document understanding to docs
* fix(litellm_pre_call_utils.py): fix linting error
* fix(init.py): fix passing of bedrock converse models
* feat(bedrock/converse): support 'response_format={"type": "json_object"}'
* fix(converse_handler.py): fix linting error
* fix(base_llm_unit_tests.py): fix test
* fix: fix test
* test: fix test
* test: fix test
* test: remove duplicate test
---------
Co-authored-by: h4n0 <4738254+h4n0@users.noreply.github.com>
* fix(factory.py): ensure tool call converts image url
Fixes https://github.com/BerriAI/litellm/issues/6953
* fix(transformation.py): support mp4 + pdf url's for vertex ai
Fixes https://github.com/BerriAI/litellm/issues/6936
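A sketch of passing a GCS-hosted PDF (or mp4) URL to Gemini on Vertex AI; the bucket path and model name are illustrative assumptions:

```python
import litellm

response = litellm.completion(
    model="vertex_ai/gemini-1.5-flash",  # illustrative model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize this document."},
            # gs:// file URLs (pdf, mp4) are now passed through to Vertex
            {"type": "image_url", "image_url": {"url": "gs://my-bucket/report.pdf"}},
        ],
    }],
)
print(response.choices[0].message.content)
```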
* fix(http_handler.py): mask gemini api key in error logs
Fixes https://github.com/BerriAI/litellm/issues/6963
* docs(prometheus.md): update prometheus FAQs
* feat(auth_checks.py): ensure specific model access > wildcard model access
if wildcard model is in access group, but specific model is not - deny access
* fix(auth_checks.py): handle auth checks for team based model access groups
handles the scenario where a model access group is used for wildcard models
* fix(internal_user_endpoints.py): support adding guardrails on `/user/update`
Fixes https://github.com/BerriAI/litellm/issues/6942
* fix(key_management_endpoints.py): fix prepare_metadata_fields helper
* fix: fix tests
* build(requirements.txt): bump openai dep version
fixes proxies argument
* test: fix tests
* fix(http_handler.py): fix error message masking
* fix(bedrock_guardrails.py): pass in prepped data
* test: fix test
* test: fix nvidia nim test
* fix(http_handler.py): return original response headers
* fix: revert maskedhttpstatuserror
* test: update tests
* test: cleanup test
* fix(key_management_endpoints.py): fix metadata field update logic
* fix(key_management_endpoints.py): maintain initial order of guardrails in key update
* fix(key_management_endpoints.py): handle prepare metadata
* fix: fix linting errors
* fix: fix linting errors
* fix: fix linting errors
* fix: fix key management errors
* fix(key_management_endpoints.py): update metadata
* test: update test
* refactor: add more debug statements
* test: skip flaky test
* test: fix test
* fix: fix test
* fix: fix update metadata logic
* fix: fix test
* ci(config.yml): change db url for e2e ui testing
* Fix Vertex AI function calling invoke: use JSON format instead of protobuf text format. (#6702)
* test: test tool_call conversion when arguments is empty dict
Fixes https://github.com/BerriAI/litellm/issues/6833
* fix(openai_like/handler.py): return more descriptive error message
Fixes https://github.com/BerriAI/litellm/issues/6812
* test: skip overloaded model
* docs(anthropic.md): update anthropic docs to show how to route to any new model
* feat(groq/): fake stream when 'response_format' param is passed
Groq doesn't support streaming when response_format is set
* feat(groq/): add response_format support for groq
Closes https://github.com/BerriAI/litellm/issues/6845
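A sketch of the Groq JSON-mode support: `response_format` is passed through, and when combined with `stream=True` litellm fakes the stream since Groq does not stream with response_format set. The model name is illustrative:

```python
import litellm

response = litellm.completion(
    model="groq/llama-3.1-70b-versatile",  # illustrative model
    messages=[{"role": "user", "content": "Give me a JSON object with a 'joke' key."}],
    response_format={"type": "json_object"},
    stream=True,  # served as a fake stream under the hood
)
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")
```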
* fix(o1_handler.py): remove fake streaming for o1
Closes https://github.com/BerriAI/litellm/issues/6801
* build(model_prices_and_context_window.json): add groq llama3.2b model pricing
Closes https://github.com/BerriAI/litellm/issues/6807
* fix(utils.py): fix handling ollama response format param
Fixes https://github.com/BerriAI/litellm/issues/6848#issuecomment-2491215485
* docs(sidebars.js): refactor chat endpoint placement
* fix: fix linting errors
* test: fix test
* test: fix test
* fix(openai_like/handler): handle max retries
* fix(streaming_handler.py): fix streaming check for openai-compatible providers
* test: update test
* test: correctly handle model is overloaded error
* test: update test
* test: fix test
* test: mark flaky test
---------
Co-authored-by: Guowang Li <Guowang@users.noreply.github.com>
* fix(anthropic/chat/transformation.py): add json schema as values: json_schema
fixes passing pydantic obj to anthropic
Fixes https://github.com/BerriAI/litellm/issues/6766
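A sketch of the fix in use: a Pydantic model passed as `response_format` is converted to a JSON schema Anthropic accepts. The model id is illustrative:

```python
import litellm
from pydantic import BaseModel

class CalendarEvent(BaseModel):
    name: str
    date: str

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",  # illustrative model id
    messages=[{"role": "user", "content": "Alice and Bob meet on Friday."}],
    response_format=CalendarEvent,  # pydantic object, converted to a json schema
)
print(response.choices[0].message.content)
```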
* (feat): Add timestamp_granularities parameter to transcription API (#6457)
* Add timestamp_granularities parameter to transcription API
* add param to the local test
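A minimal sketch of the new transcription parameter; the audio file is illustrative, and verbose_json is assumed since OpenAI requires it for word-level timestamps:

```python
import litellm

with open("speech.mp3", "rb") as audio_file:
    transcript = litellm.transcription(
        model="whisper-1",
        file=audio_file,
        response_format="verbose_json",
        timestamp_granularities=["word"],  # new param, passed through to the API
    )
print(transcript)
```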
* fix(databricks/chat.py): handle max_retries optional param handling for openai-like calls
Fixes issue with calling finetuned vertex ai models via databricks route
* build(ui/): add team admins via proxy ui
* fix: fix linting error
* test: fix test
* docs(vertex.md): refactor docs
* test: handle overloaded anthropic model error
* test: remove duplicate test
* test: fix test
* test: update test to handle model overloaded error
---------
Co-authored-by: Show <35062952+BrunooShow@users.noreply.github.com>
* fix(__init__.py): add 'watsonx_text' as mapped llm api route
Fixes https://github.com/BerriAI/litellm/issues/6663
* fix(opentelemetry.py): fix passing parallel tool calls to otel
Fixes https://github.com/BerriAI/litellm/issues/6677
* refactor(test_opentelemetry_unit_tests.py): create a base set of unit tests for all logging integrations - test for parallel tool call handling
reduces bugs in repo
* fix(__init__.py): update provider-model mapping to include all known provider-model mappings
Fixes https://github.com/BerriAI/litellm/issues/6669
* feat(anthropic): support passing document in llm api call
* docs(anthropic.md): add pdf anthropic call to docs + expose new 'supports_pdf_input' function
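A sketch of the document-input support: check pdf capability with the newly exposed helper, then send a base64-encoded PDF inline. The model id and data-URL encoding are illustrative assumptions:

```python
import base64
import litellm

assert litellm.supports_pdf_input(model="anthropic/claude-3-5-sonnet-20241022")

with open("paper.pdf", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",  # illustrative model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What are the key findings?"},
            {"type": "image_url", "image_url": {"url": f"data:application/pdf;base64,{encoded}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```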
* fix(factory.py): fix linting error
* fix(pattern_match_deployments.py): default to user input if unable to map based on wildcards
* test: fix test
* test: reset test name
* test: update conftest to reload proxy server module between tests
* ci(config.yml): move langfuse out of local_testing
reduce ci/cd time
* ci(config.yml): cleanup langfuse ci/cd tests
* fix: update test to not use global proxy_server app module
* ci: move caching to a separate test pipeline
speed up ci pipeline
* test: update conftest to check if proxy_server attr exists before reloading
* build(conftest.py): don't block on inability to reload proxy_server
* ci(config.yml): update caching unit test filter to work on 'cache' keyword as well
* fix(encrypt_decrypt_utils.py): use function to get salt key
* test: mark flaky test
* test: handle anthropic overloaded errors
* refactor: create separate ci/cd pipeline for proxy unit tests
make ci/cd faster
* ci(config.yml): add litellm_proxy_unit_testing to build_and_test jobs
* ci(config.yml): generate prisma binaries for proxy unit tests
* test: readd vertex_key.json
* ci(config.yml): remove `-s` from proxy_unit_test cmd
speed up test
* ci: remove any 'debug' logging flag
speed up ci pipeline
* test: fix test
* test(test_braintrust.py): rerun
* test: add delay for braintrust test
* fix(anthropic/chat/transformation.py): support anthropic disable_parallel_tool_use param
Fixes https://github.com/BerriAI/litellm/issues/6456
* feat(anthropic/chat/transformation.py): support anthropic computer tool use
Closes https://github.com/BerriAI/litellm/issues/6427
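A sketch of the `disable_parallel_tool_use` support, where the Anthropic-specific flag rides along on `tool_choice`; the tool definition and model id are illustrative assumptions:

```python
import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",  # illustrative model id
    messages=[{"role": "user", "content": "Weather in Paris and Berlin?"}],
    tools=tools,
    tool_choice={"type": "auto", "disable_parallel_tool_use": True},
)
print(response.choices[0].message.tool_calls)  # at most one call per turn
```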
* fix(vertex_ai/common_utils.py): parse out '$schema' when calling vertex ai
Fixes an issue when trying to call Vertex from the Vercel SDK
* fix(main.py): add 'extra_headers' support for azure on all translation endpoints
Fixes https://github.com/BerriAI/litellm/issues/6465
* fix: fix linting errors
* fix(transformation.py): handle no beta headers for anthropic
* test: cleanup test
* fix: fix linting error
* fix: fix linting errors
* fix: fix linting errors
* fix(transformation.py): handle dummy tool call
* fix(main.py): fix linting error
* fix(azure.py): pass required param
* LiteLLM Minor Fixes & Improvements (10/24/2024) (#6441)
* fix(azure.py): handle /openai/deployment in azure api base
* fix(factory.py): fix faulty anthropic tool result translation check
Fixes https://github.com/BerriAI/litellm/issues/6422
* fix(gpt_transformation.py): add support for parallel_tool_calls to azure
Fixes https://github.com/BerriAI/litellm/issues/6440
* fix(factory.py): support anthropic prompt caching for tool results
* fix(vertex_ai/common_utils): don't pop non-null required field
Fixes https://github.com/BerriAI/litellm/issues/6426
* feat(vertex_ai.py): support code_execution tool call for vertex ai + gemini
Closes https://github.com/BerriAI/litellm/issues/6434
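A sketch of the code-execution tool pass-through for Gemini on Vertex AI; the tool dict mirrors Google's API and should be treated as an assumption here, as is the model name:

```python
import litellm

response = litellm.completion(
    model="vertex_ai/gemini-1.5-pro",  # illustrative model
    messages=[{"role": "user", "content": "What is the sum of the first 50 primes? Run code."}],
    tools=[{"code_execution": {}}],    # assumed pass-through format
)
print(response.choices[0].message.content)
```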
* build(model_prices_and_context_window.json): Add 'supports_assistant_prefill' for bedrock claude-3-5-sonnet v2 models
Closes https://github.com/BerriAI/litellm/issues/6437
* fix(types/utils.py): fix linting
* test: update test to include required fields
* test: fix test
* test: handle flaky test
* test: remove e2e test - hitting gemini rate limits
* Litellm dev 10 26 2024 (#6472)
* docs(exception_mapping.md): add missing exception types
Fixes https://github.com/Aider-AI/aider/issues/2120#issuecomment-2438971183
* fix(main.py): register custom model pricing with specific key
Ensure custom model pricing is registered to the specific model+provider key combination
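A sketch of registering custom pricing under the provider-qualified key this fix targets; the model name and per-token costs are illustrative:

```python
import litellm

litellm.register_model({
    "openai/my-finetuned-gpt-4o": {        # provider-qualified key
        "input_cost_per_token": 0.000005,  # illustrative costs
        "output_cost_per_token": 0.000015,
        "litellm_provider": "openai",
        "mode": "chat",
    }
})

print(litellm.model_cost["openai/my-finetuned-gpt-4o"])
```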
* test: make testing more robust for custom pricing
* fix(redis_cache.py): instrument otel logging for sync redis calls
ensures complete coverage for all redis cache calls
* (Testing) Add unit testing for DualCache - ensure in memory cache is used when expected (#6471)
* test test_dual_cache_get_set
* unit testing for dual cache
* fix async_set_cache_sadd
* test_dual_cache_local_only
* redis otel tracing + async support for latency routing (#6452)
* docs(exception_mapping.md): add missing exception types
Fixes https://github.com/Aider-AI/aider/issues/2120#issuecomment-2438971183
* fix(main.py): register custom model pricing with specific key
Ensure custom model pricing is registered to the specific model+provider key combination
* test: make testing more robust for custom pricing
* fix(redis_cache.py): instrument otel logging for sync redis calls
ensures complete coverage for all redis cache calls
* refactor: pass parent_otel_span for redis caching calls in router
allows for more observability into what calls are causing latency issues
* test: update tests with new params
* refactor: ensure e2e otel tracing for router
* refactor(router.py): add more otel tracing across the router
catch all latency issues for router requests
* fix: fix linting error
* fix(router.py): fix linting error
* fix: fix test
* test: fix tests
* fix(dual_cache.py): pass ttl to redis cache
* fix: fix param
* fix(dual_cache.py): set default value for parent_otel_span
* fix(transformation.py): support 'response_format' for anthropic calls
* fix(transformation.py): check for cache_control inside 'function' block
* fix: fix linting error
* fix: fix linting errors
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* fix get_response_headers
* unit testing for get headers
* unit testing for anthropic / azure openai headers
* increase test coverage for test_completion_response_ratelimit_headers
* fix test rate limit headers