* fix(litellm_logging.py): pass user metadata to langsmith on sdk calls
* fix(litellm_logging.py): pass nested user metadata to logging integration - e.g. langsmith
* fix(exception_mapping_utils.py): catch the watsonx `/text/chat` "endpoint not supported" error and clarify the error message.
Closes https://github.com/BerriAI/litellm/issues/7213
* fix(watsonx/common_utils.py): accept new 'WATSONX_IAM_URL' env var
allows users to use a local watsonx instance
Fixes https://github.com/BerriAI/litellm/issues/4991
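A minimal usage sketch, assuming the standard watsonx env vars alongside the new one (all URLs and keys below are illustrative):

```python
import os
import litellm

# Illustrative values - point LiteLLM at a self-hosted watsonx IAM endpoint
os.environ["WATSONX_IAM_URL"] = "https://iam.my-local-watsonx.example.com/identity/token"
os.environ["WATSONX_URL"] = "https://my-local-watsonx.example.com"  # assumed companion var
os.environ["WATSONX_APIKEY"] = "local-api-key"                      # assumed companion var

response = litellm.completion(
    model="watsonx/ibm/granite-13b-chat-v2",
    messages=[{"role": "user", "content": "Hello"}],
)
```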
* fix(litellm_logging.py): cleanup unused function
* test: skip bad ibm test
* feat(bedrock/): add bedrock converse top k param
Closes https://github.com/BerriAI/litellm/issues/7087
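A rough usage sketch from the SDK side (the exact mapping into the Converse inference config is assumed):

```python
import litellm

# top_k is a provider-specific param; with this change it is forwarded
# for Bedrock Converse models (sketch, not the exact internal mapping)
response = litellm.completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": "Hello"}],
    top_k=10,
)
```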
* Fix bedrock empty content error (#7177)
* add resolver
* handle empty content on bedrock with default content
* use existing default message, tests
* Update tests/llm_translation/test_bedrock_completion.py
* fix tests
* Revert "add resolver"
This reverts commit c717e376ee.
* fallback to empty
---------
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
* fix(factory.py): handle empty content blocks in messages
Fixes https://github.com/BerriAI/litellm/issues/7169
* feat(router.py): add stripped model check to model fallback search
if model_name="openai/gpt-3.5-turbo" and fallback=[{"gpt-3.5-turbo"..}], the fallback should still work as expected
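A minimal sketch of the scenario, assuming a standard Router fallbacks config (deployment names are illustrative):

```python
from litellm import Router

# The fallback key uses the stripped name "gpt-3.5-turbo"; with this change it
# still matches the deployment whose model_name carries the "openai/" prefix.
router = Router(
    model_list=[
        {"model_name": "openai/gpt-3.5-turbo", "litellm_params": {"model": "openai/gpt-3.5-turbo"}},
        {"model_name": "claude-3-haiku", "litellm_params": {"model": "anthropic/claude-3-haiku-20240307"}},
    ],
    fallbacks=[{"gpt-3.5-turbo": ["claude-3-haiku"]}],
)
```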
* fix: fix linting error
* fix(factory.py): fix linting error
* fix(factory.py): in base case still support skip empty text blocks
---------
Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
* fix(azure/): support passing headers to azure openai endpoints
Fixes https://github.com/BerriAI/litellm/issues/6217
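A hedged usage sketch (deployment name, header name, and the exact kwarg spelling are assumptions):

```python
import litellm

# Sketch: forward custom headers to an Azure OpenAI deployment
response = litellm.completion(
    model="azure/my-gpt-4o-deployment",          # hypothetical deployment name
    api_base="https://my-resource.openai.azure.com",
    api_version="2024-02-15-preview",
    api_key="...",
    messages=[{"role": "user", "content": "Hello"}],
    headers={"x-custom-trace-id": "abc-123"},    # assumed kwarg for per-request headers
)
```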
* fix(utils.py): move default tokenizer to just openai
the hf tokenizer makes network calls when fetching the tokenizer - this slows down execution
* fix(router.py): fix pattern matching router - add generic "*" to it as well
Fixes issue where generic "*" model access group wouldn't show up
* fix(pattern_match_deployments.py): match to more specific pattern
allows setting generic wildcard model access group and excluding specific models more easily
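A sketch of the intended setup, assuming standard wildcard deployments (names are illustrative):

```python
from litellm import Router

# With the fix, "*" participates in pattern matching too, and a request for
# "openai/gpt-4o" resolves to the more specific "openai/*" deployment instead
# of the generic catch-all.
router = Router(
    model_list=[
        {"model_name": "*", "litellm_params": {"model": "*"}},
        {"model_name": "openai/*", "litellm_params": {"model": "openai/*"}},
    ]
)
```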
* fix(proxy_server.py): fix _delete_deployment to handle base case where db_model list is empty
don't delete all router models because of an empty list
Fixes https://github.com/BerriAI/litellm/issues/7196
* fix(anthropic/): fix handling response_format for anthropic messages with anthropic api
* fix(fireworks_ai/): support passing response_format + tool call in same message
Addresses https://github.com/BerriAI/litellm/issues/7135
* Revert "fix(fireworks_ai/): support passing response_format + tool call in same message"
This reverts commit 6a30dc6929.
* test: fix test
* fix(replicate/): fix replicate default retry/polling logic
* test: add unit testing for router pattern matching
* test: update test to use default oai tokenizer
* test: mark flaky test
* test: skip flaky test
* add unit test for test_datadog_static_methods
* docs dd vars
* test_datadog_payload_environment_variables
* test_datadog_static_methods
* docs env vars
* fix table
* fix(acompletion): support fallbacks on acompletion
allows health checks for wildcard routes to use fallback models
* test: update cohere generate api testing
* add max tokens to health check (#7000)
* fix: fix health check test
* test: update testing
---------
Co-authored-by: Cameron <561860+wallies@users.noreply.github.com>
* refactor(fireworks_ai/): inherit from openai like base config
refactors fireworks ai to use a common config
* test: fix import in test
* refactor(watsonx/): refactor watsonx to use llm base config
refactors chat + completion routes to base config path
* fix: fix linting error
* refactor: inherit base llm config for oai compatible routes
* test: fix test
* test: fix test
* test: fix test
* fix: fix test
* fix use new format for Cohere config
* fix base llm http handler
* Litellm code qa common config (#7116)
* feat(base_llm): initial commit for common base config class
Addresses code qa critique https://github.com/andrewyng/aisuite/issues/113#issuecomment-2512369132
* feat(base_llm/): add transform request/response abstract methods to base config class
---------
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
* use base transform helpers
* use base_llm_http_handler for cohere
* working cohere using base llm handler
* add async cohere chat completion support on base handler
* fix completion code
* working sync cohere stream
* add async support cohere_chat
* fix types get_model_response_iterator
* async / sync tests cohere
* feat cohere using base llm class
* fix linting errors
* fix _abc error
* add cohere params to transformation
* remove old cohere file
* fix type error
* fix merge conflicts
* fix cohere merge conflicts
* fix linting error
* fix litellm.llms.custom_httpx.http_handler.HTTPHandler.post
* fix passing cohere specific params
---------
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
* feat(cohere-+-clarifai): refactor integrations to use common base config class
* fix: fix linting errors
* refactor(anthropic/): move anthropic + vertex anthropic to use base config
* test: fix xai test
* test: fix tests
* fix: fix linting errors
* test: comment out WIP test
* fix(transformation.py): fix 'is pdf used' check
* fix: fix linting error
* fix(main.py): support passing max retries to azure/openai embedding integrations
Fixes https://github.com/BerriAI/litellm/issues/7003
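A minimal sketch (deployment name is illustrative):

```python
import litellm

# max_retries is now forwarded to the underlying Azure/OpenAI client
response = litellm.embedding(
    model="azure/my-embedding-deployment",  # hypothetical deployment name
    input=["hello world"],
    max_retries=5,
)
```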
* feat(team_endpoints.py): allow updating team model aliases
Closes https://github.com/BerriAI/litellm/issues/6956
* feat(router.py): allow specifying model id as fallback - skips any cooldown check
Allows a default model to be checked if all models are in cooldown
s/o @micahjsmith
* docs(reliability.md): add fallback to specific model to docs
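A hypothetical sketch of the config shape, assuming a deployment id set via `model_info.id` can be referenced directly in `fallbacks` (see reliability.md for the documented form):

```python
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-4o", "litellm_params": {"model": "openai/gpt-4o"}},
        {
            "model_name": "gpt-4o-backup",
            "litellm_params": {"model": "azure/my-gpt-4o"},      # hypothetical deployment
            "model_info": {"id": "my-fallback-deployment-id"},
        },
    ],
    # fallback by explicit model id - assumed to skip the cooldown check
    fallbacks=[{"gpt-4o": ["my-fallback-deployment-id"]}],
)
```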
* fix(utils.py): new 'is_prompt_caching_valid_prompt' helper util
Allows users to identify if messages/tools have prompt caching enabled
Related issue: https://github.com/BerriAI/litellm/issues/6784
* feat(router.py): store model id for prompt caching valid prompt
Allows routing to that model id on subsequent requests
* fix(router.py): only cache the model id if the prompt is a valid prompt-caching prompt
prevents storing unnecessary items in the cache
* feat(router.py): support routing prompt caching enabled models to previous deployments
Closes https://github.com/BerriAI/litellm/issues/6784
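A sketch of what a prompt-caching-valid prompt looks like (an Anthropic-style `cache_control` block), which is assumed to be the shape the new helper and routing logic key off:

```python
import litellm

messages = [
    {
        "role": "system",
        "content": [
            {
                "type": "text",
                "text": "You are a helpful assistant. <long reusable context>",
                "cache_control": {"type": "ephemeral"},  # marks the prefix as cacheable
            }
        ],
    },
    {"role": "user", "content": "Summarize the context."},
]

# Subsequent requests with the same cached prefix can be routed back to the
# deployment that served the first request.
response = litellm.completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)
```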
* test: fix linting errors
* feat(databricks/): convert basemodel to dict and exclude none values
allows passing pydantic messages to databricks
* fix(utils.py): ensure all chat completion messages are dict
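A minimal sketch of the conversion, assuming pydantic v2 messages:

```python
from typing import Optional
from pydantic import BaseModel

class Message(BaseModel):
    role: str
    content: str
    name: Optional[str] = None

# BaseModel messages are dumped to plain dicts with None values excluded
# before being sent to databricks.
msg = Message(role="user", content="Hello")
as_dict = msg.model_dump(exclude_none=True)  # {'role': 'user', 'content': 'Hello'}
```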
* (feat) Track `custom_llm_provider` in LiteLLMSpendLogs (#7081)
* add custom_llm_provider to SpendLogsPayload
* add custom_llm_provider to SpendLogs
* add custom llm provider to SpendLogs payload
* test_spend_logs_payload
* Add MLflow to the side bar (#7031)
Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
* (bug fix) SpendLogs update DB catch all possible DB errors for retrying (#7082)
* catch DB_CONNECTION_ERROR_TYPES
* fix DB retry mechanism for SpendLog updates
* use DB_CONNECTION_ERROR_TYPES in auth checks
* fix exp back off for writing SpendLogs
* use _raise_failed_update_spend_exception to ensure errors print as NON blocking
* test_update_spend_logs_multiple_batches_with_failure
* (Feat) Add StructuredOutputs support for Fireworks.AI (#7085)
* fix model cost map for fireworks ai: set "supports_response_schema": true
* fix supports_response_schema
* fix map openai params fireworks ai
* test_map_response_format
* test_map_response_format
* added deepinfra/Meta-Llama-3.1-405B-Instruct (#7084)
* bump: version 1.53.9 → 1.54.0
* fix deepinfra
* litellm db fixes LiteLLM_UserTable (#7089)
* ci/cd queue new release
* fix llama-3.3-70b-versatile
* refactor - use consistent file naming convention `AI21/` -> `ai21` (#7090)
* fix refactor - use consistent file naming convention
* ci/cd run again
* fix naming structure
* fix use consistent naming (#7092)
---------
Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Yuki Watanabe <31463517+B-Step62@users.noreply.github.com>
Co-authored-by: ali sayyah <ali.sayyah2@gmail.com>
* feat(langfuse/): support langfuse prompt management
Initial working commit for langfuse prompt management support
Closes https://github.com/BerriAI/litellm/issues/6269
* test: update test
* fix(litellm_logging.py): suppress linting error
* fix(edit_budget_modal.tsx): call `/budget/update` endpoint instead of `/budget/new`
allows updating an existing budget from the UI
* fix(user_api_key_auth.py): support cost tracking for end user via jwt field
* fix(presidio.py): support pii masking on sync logging callbacks
enables masking before logging to langfuse
* feat(utils.py): support retry policy logic inside '.completion()'
Fixes https://github.com/BerriAI/litellm/issues/6623
* fix(utils.py): support retry by retry policy on async logic as well
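A hedged sketch, assuming the Router-style `RetryPolicy` object (the import path, field names, and the `retry_policy` kwarg on `completion()` are assumptions):

```python
import litellm
from litellm.types.router import RetryPolicy  # assumed import path

retry_policy = RetryPolicy(
    TimeoutErrorRetries=3,
    RateLimitErrorRetries=3,
    ContentPolicyViolationErrorRetries=0,
)

# With this change, the per-exception retry counts are assumed to be honored
# by completion()/acompletion() directly, not only by the Router.
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
    retry_policy=retry_policy,
)
```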
* fix(handle_jwt.py): set default leeway value
* test: fix test to handle jwt audience claim
* fix(cost_calculator.py): move to using `.get_model_info()` for cost per token calculations
ensures cost tracking is reliable - handles edge cases when parsing the model cost map
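A small sketch of the lookup now used for cost-per-token values:

```python
import litellm

# get_model_info() normalizes entries from the model cost map, so edge cases
# (missing keys, provider prefixes) are handled in one place.
info = litellm.get_model_info("gpt-4o-mini")
print(info["input_cost_per_token"], info["output_cost_per_token"])
```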
* build(model_prices_and_context_window.json): add 'supports_response_schema' for select tgai models
Fixes https://github.com/BerriAI/litellm/pull/7037#discussion_r1872157329
* build(model_prices_and_context_window.json): remove 'pdf input' and 'vision' support from nova micro in model map
Bedrock docs indicate no support for micro - https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference-supported-models-features.html
* fix(converse_transformation.py): support amazon nova tool use
* fix(opentelemetry): Add missing LLM request type attribute to spans (#7041)
* feat(opentelemetry): add LLM request type attribute to spans
* lint
* fix: curl usage (#7038)
curl -d, --data <data> is lowercase d
curl -D, --dump-header <filename> is uppercase D
references:
https://curl.se/docs/manpage.html#-d
https://curl.se/docs/manpage.html#-D
* fix(spend_tracking.py): handle empty 'id' in model response - when creating spend log
Fixes https://github.com/BerriAI/litellm/issues/7023
* fix(streaming_chunk_builder.py): handle initial id being empty string
Fixes https://github.com/BerriAI/litellm/issues/7023
* fix(anthropic_passthrough_logging_handler.py): add end user cost tracking for anthropic pass through endpoint
* docs(pass_through/): refactor docs location + add table on supported features for pass through endpoints
* feat(anthropic_passthrough_logging_handler.py): support end user cost tracking via anthropic sdk
* docs(anthropic_completion.md): add docs on passing end user param for cost tracking on anthropic sdk
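A sketch of end-user cost tracking through the pass-through route (the base URL, key, and `metadata.user_id` being what the handler reads are assumptions):

```python
import anthropic

client = anthropic.Anthropic(
    base_url="http://localhost:4000/anthropic",  # LiteLLM proxy pass-through route
    api_key="sk-litellm-proxy-key",              # hypothetical proxy key
)

# The Anthropic SDK's metadata.user_id is assumed to be picked up by the
# passthrough logging handler for end-user cost tracking.
message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello"}],
    metadata={"user_id": "end-user-123"},
)
```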
* fix(litellm_logging.py): use standard logging payload if present in kwargs
prevent datadog logging error for pass through endpoints
* docs(bedrock.md): add rerank api usage example to docs
* bugfix/change dummy tool name format (#7053)
* fix viewing keys (#7042)
* ui new build
* build(model_prices_and_context_window.json): add bedrock region models to model cost map (#7044)
* bye (#6982)
* (fix) litellm router.aspeech (#6962)
* doc Migrating Databases
* fix aspeech on router
* test_audio_speech_router
* test_audio_speech_router
* docs show supported providers on batches api doc
* change dummy tool name format
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* fix: fix linting errors
* test: update test
* fix(litellm_logging.py): fix pass through check
* fix(test_otel_logging.py): fix test
* fix(cost_calculator.py): update handling for cost per second
* fix(cost_calculator.py): fix cost check
* test: fix test
* (fix) adding public routes when using custom header (#7045)
* get_api_key_from_custom_header
* add test_get_api_key_from_custom_header
* fix testing use 1 file for test user api key auth
* fix test user api key auth
* test_custom_api_key_header_name
* build: update ui build
---------
Co-authored-by: Doron Kopit <83537683+doronkopit5@users.noreply.github.com>
Co-authored-by: lloydchang <lloydchang@gmail.com>
Co-authored-by: hgulersen <haymigulersen@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>