Commit graph

415 commits

Author SHA1 Message Date
Krrish Dholakia
f089b1e23f feat(endpoints.py): support adding credentials by model id
Allows user to reuse existing model credentials
2025-03-14 12:32:32 -07:00
Krrish Dholakia
605a4d1121 feat(endpoints.py): enable retrieving existing credentials by model name
Enables reusing existing credentials
2025-03-14 12:02:50 -07:00
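
The two credential commits above describe reusing stored credentials by model id or model name. A minimal sketch of that idea, assuming a simple in-memory store — `CredentialStore` and its method names are illustrative, not litellm's actual endpoints.py implementation:

```python
from typing import Optional


class CredentialStore:
    """Hypothetical in-memory store illustrating credential reuse by model id/name."""

    def __init__(self) -> None:
        self._credentials: dict[str, dict] = {}  # credential_name -> secret values
        self._model_index: dict[str, str] = {}   # model id or name -> credential_name

    def upsert_credential(self, credential_name: str, values: dict) -> None:
        self._credentials[credential_name] = values

    def link_model(self, model_key: str, credential_name: str) -> None:
        # Attach an existing credential to a model id or model name, so the
        # model reuses stored secrets instead of duplicating them.
        self._model_index[model_key] = credential_name

    def get_credential_for_model(self, model_key: str) -> Optional[dict]:
        name = self._model_index.get(model_key)
        return self._credentials.get(name) if name else None


store = CredentialStore()
store.upsert_credential("azure-prod", {"api_key": "sk-...", "api_base": "https://example.azure.com"})
store.link_model("gpt-4o-deployment-1", "azure-prod")
assert store.get_credential_for_model("gpt-4o-deployment-1")["api_base"].startswith("https")
```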
Ishaan Jaff
276a7089df
Merge pull request #9220 from BerriAI/litellm_qa_responses_api
[Fixes] Responses API - allow /responses and subpaths as LLM API route + Add exception mapping for responses API
2025-03-13 21:36:59 -07:00
Ishaan Jaff
7827c275ba exception_type 2025-03-13 20:09:32 -07:00
Sunny Wan
f9a5109203
Merge branch 'BerriAI:main' into main 2025-03-13 19:37:22 -04:00
Ishaan Jaff
15d618f5b1 Add exception mapping for responses API 2025-03-13 15:57:58 -07:00
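
A sketch of what exception mapping for the Responses API might look like: translate provider HTTP status codes into typed errors so callers get consistent exceptions across endpoints. The class and mapping here are illustrative; litellm's real `exception_type` logic covers many more cases.

```python
class BadRequestError(Exception): ...
class AuthenticationError(Exception): ...
class RateLimitError(Exception): ...
class APIError(Exception): ...

_STATUS_TO_ERROR: dict[int, type[Exception]] = {
    400: BadRequestError,
    401: AuthenticationError,
    429: RateLimitError,
}

def map_responses_api_exception(status_code: int, message: str) -> Exception:
    # Fall back to a generic APIError for unmapped status codes.
    exc_cls = _STATUS_TO_ERROR.get(status_code, APIError)
    return exc_cls(f"Responses API error ({status_code}): {message}")
```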
Ishaan Jaff
1ee6b7852f fix exception_type 2025-03-13 15:33:17 -07:00
Krish Dholakia
cff1c1f7d8
Merge branch 'main' into litellm_dev_03_12_2025_p1 2025-03-12 22:14:02 -07:00
Krrish Dholakia
52926408cd feat(credential_accessor.py): fix upserting new credentials via accessor 2025-03-12 19:03:37 -07:00
Krrish Dholakia
738c0b873d fix(azure_ai/transformation.py): support passing api version to azure ai services endpoint
Fixes https://github.com/BerriAI/litellm/issues/7275
2025-03-12 15:16:42 -07:00
Krish Dholakia
2d957a0ed9
Merge branch 'main' into litellm_dev_03_10_2025_p3 2025-03-12 14:56:01 -07:00
Ishaan Jaff
c2dbcb798f working streaming logging + cost tracking 2025-03-12 07:27:53 -07:00
Ishaan Jaff
46bc76d3e6 _get_assembled_streaming_response 2025-03-12 07:21:03 -07:00
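
The `_get_assembled_streaming_response` commit points at a common pattern: reassembling streamed deltas into one response object so logging and cost tracking can treat streaming like a normal completion. A simplified sketch, assuming OpenAI-style chunk dicts rather than litellm's exact types:

```python
def get_assembled_streaming_response(chunks: list[dict]) -> dict:
    content_parts: list[str] = []
    usage: dict = {}
    for chunk in chunks:
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})
            if delta.get("content"):
                content_parts.append(delta["content"])
        # usage typically arrives only on the final chunk of a stream
        if chunk.get("usage"):
            usage = chunk["usage"]
    return {
        "choices": [{"message": {"role": "assistant", "content": "".join(content_parts)}}],
        "usage": usage,
    }
```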
Ishaan Jaff
122c11d346 revert to older logging implementation 2025-03-12 07:14:36 -07:00
Ishaan Jaff
fde75a068a working streaming logging 2025-03-12 00:02:39 -07:00
Ishaan Jaff
51dc24a405 _transform_response_api_usage_to_chat_usage 2025-03-11 22:26:44 -07:00
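
The Responses API reports usage as `input_tokens`/`output_tokens`, while chat-completions usage uses `prompt_tokens`/`completion_tokens`. A hedged sketch of the field remap the commit name suggests, so cost tracking can share one code path:

```python
def transform_response_api_usage_to_chat_usage(usage: dict) -> dict:
    # Responses API counts tokens as input/output; chat usage expects
    # prompt/completion naming.
    prompt_tokens = usage.get("input_tokens", 0)
    completion_tokens = usage.get("output_tokens", 0)
    return {
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "total_tokens": usage.get("total_tokens", prompt_tokens + completion_tokens),
    }
```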
Ishaan Jaff
24cb83b0e4 Response API cost tracking 2025-03-11 22:02:14 -07:00
Krrish Dholakia
9af73f339a test: fix tests 2025-03-11 17:42:36 -07:00
Krrish Dholakia
152bc67d22 refactor(azure.py): working azure client init on audio speech endpoint 2025-03-11 14:19:45 -07:00
Krrish Dholakia
92881ee79e fix: fix linting error 2025-03-10 21:22:00 -07:00
Krrish Dholakia
f56c5ca380 feat: working e2e credential management - support reusing existing credentials 2025-03-10 19:29:24 -07:00
Krrish Dholakia
fdd5ba3084 feat(credential_accessor.py): support loading in credentials from credential_list
Resolves https://github.com/BerriAI/litellm/issues/9114
2025-03-10 17:15:58 -07:00
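
A self-contained sketch of resolving a named credential from a shared `credential_list` onto a model entry. Field names (`credential_name`, `credential_values`, `litellm_credential_name`) are assumed from the commit and linked issue, not copied from litellm's source:

```python
credential_list = [
    {
        "credential_name": "azure_prod",
        "credential_values": {"api_key": "sk-...", "api_base": "https://example.azure.com"},
    }
]

model_entry = {
    "model_name": "gpt-4o",
    "litellm_params": {"model": "azure/gpt-4o", "litellm_credential_name": "azure_prod"},
}

def resolve_credentials(model_entry: dict, credential_list: list[dict]) -> dict:
    wanted = model_entry["litellm_params"].get("litellm_credential_name")
    for cred in credential_list:
        if cred["credential_name"] == wanted:
            # merge stored secrets into the call params instead of duplicating them
            return {**model_entry["litellm_params"], **cred["credential_values"]}
    return model_entry["litellm_params"]

print(resolve_credentials(model_entry, credential_list)["api_base"])
```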
Krrish Dholakia
bfbe26b91d feat(azure.py): add azure bad request error support 2025-03-10 15:59:06 -07:00
Krrish Dholakia
5f87dc229a feat(openai.py): bubble all error information back to client 2025-03-10 15:27:43 -07:00
Krish Dholakia
f899b828cf
Support openrouter reasoning_content on streaming (#9094)
* feat(convert_dict_to_response.py): support openrouter format of reasoning content

* fix(transformation.py): fix openrouter streaming with reasoning content

Fixes https://github.com/BerriAI/litellm/issues/8193#issuecomment-270892962

* fix: fix type error
2025-03-09 20:03:59 -07:00
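
As I understand the OpenRouter format, reasoning arrives under a `reasoning` key in the streamed delta, while the OpenAI-compatible surface here expects `reasoning_content`. A minimal sketch of that remap (delta shape simplified):

```python
def transform_openrouter_delta(delta: dict) -> dict:
    transformed = dict(delta)
    reasoning = transformed.pop("reasoning", None)
    if reasoning is not None:
        # remap OpenRouter's field name to the expected one
        transformed["reasoning_content"] = reasoning
    return transformed

assert transform_openrouter_delta({"reasoning": "step 1..."}) == {"reasoning_content": "step 1..."}
```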
Krish Dholakia
e00d4fb18c
Litellm dev 03 08 2025 p3 (#9089)
* feat(ollama_chat.py): pass down http client to ollama_chat

enables easier testing

* fix(factory.py): fix passing images to ollama's `/api/generate` endpoint

Fixes https://github.com/BerriAI/litellm/issues/6683

* fix(factory.py): fix ollama pt to handle templating correctly
2025-03-09 18:20:56 -07:00
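
For context on the image fix above: Ollama's `/api/generate` endpoint takes images as a list of raw base64 strings (no data-URI prefix) alongside the prompt. A standalone sketch of that payload shape — the surrounding code is illustrative, not litellm's factory.py:

```python
import base64
import json
import urllib.request

def ollama_generate_with_image(prompt: str, image_path: str, model: str = "llava") -> str:
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    payload = {
        "model": model,
        "prompt": prompt,
        "images": [image_b64],  # raw base64 payloads, no "data:image/..." prefix
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```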
Krish Dholakia
4330ef8e81
Fix batches api cost tracking + Log batch models in spend logs / standard logging payload (#9077)

* feat(batches/): fix batch cost calculation - ensure it's accurate

use the correct cost value - prev. defaulting to non-batch cost

* feat(batch_utils.py): log batch models to spend logs + standard logging payload

makes it easy to understand how cost was calculated

* fix: fix stored payload for test

* test: fix test
2025-03-08 11:47:25 -08:00
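
A sketch of the cost fix's shape: prefer batch-specific per-token prices when the request ran through the batches API, instead of defaulting to non-batch cost. The `*_batches` key names are assumptions for illustration, not litellm's cost-map schema:

```python
def calculate_cost(model_info: dict, prompt_tokens: int, completion_tokens: int,
                   is_batch: bool) -> float:
    input_key = "input_cost_per_token_batches" if is_batch else "input_cost_per_token"
    output_key = "output_cost_per_token_batches" if is_batch else "output_cost_per_token"
    # fall back to regular pricing when no batch price is listed
    input_cost = model_info.get(input_key) or model_info["input_cost_per_token"]
    output_cost = model_info.get(output_key) or model_info["output_cost_per_token"]
    return prompt_tokens * input_cost + completion_tokens * output_cost
```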
Ishaan Jaff
e2d612efd9
Bug fix - 'data:' string stripped from entire content in streamed Gemini responses (#9070)
* _strip_sse_data_from_chunk

* use _strip_sse_data_from_chunk

* use _strip_sse_data_from_chunk

* use _strip_sse_data_from_chunk

* _strip_sse_data_from_chunk

* test_strip_sse_data_from_chunk

* _strip_sse_data_from_chunk

* testing

* _strip_sse_data_from_chunk
2025-03-07 21:06:39 -08:00
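
A minimal sketch of what the `_strip_sse_data_from_chunk` fix above likely amounts to: strip the SSE framing prefix only at the start of a line, never occurrences of "data:" inside the payload itself (which is what mangled streamed Gemini content containing that substring):

```python
def strip_sse_data_from_chunk(chunk: str) -> str:
    stripped_lines = []
    for line in chunk.split("\n"):
        if line.startswith("data: "):
            # remove only the leading SSE prefix
            stripped_lines.append(line[len("data: "):])
        else:
            stripped_lines.append(line)
    return "\n".join(stripped_lines)

# "data:" inside the content survives; only the framing prefix is removed
assert strip_sse_data_from_chunk('data: {"text": "raw data: 42"}') == '{"text": "raw data: 42"}'
```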
Krish Dholakia
0e3caf92b9
UI - new API Playground for testing LiteLLM translation (#9073)
* feat: initial commit - enable dev to see translated request

* feat(utils.py): expose new endpoint - `/utils/transform_request` to see the raw request sent by litellm

* feat(transform_request.tsx): allow user to see their transformed request

* refactor(litellm_logging.py): return raw request in 3 parts - api_base, headers, request body

easier to render each individually on UI vs. extracting from combined string

* feat: transform_request.tsx

working e2e raw request viewing

* fix(litellm_logging.py): fix transform viewing for bedrock models

* fix(litellm_logging.py): don't return sensitive headers in raw request headers

prevent accidental leak

* feat(transform_request.tsx): style improvements
2025-03-07 19:39:31 -08:00
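
The PR above returns the raw request in three parts (api_base, headers, body) so the UI can render each individually, and redacts sensitive headers. A sketch of that response shape, with illustrative names:

```python
SENSITIVE_HEADERS = {"authorization", "api-key", "x-api-key"}

def build_transform_request_response(api_base: str, headers: dict, body: dict) -> dict:
    # mask secrets before they ever reach the UI, per the "prevent
    # accidental leak" commit above
    safe_headers = {
        k: ("***" if k.lower() in SENSITIVE_HEADERS else v) for k, v in headers.items()
    }
    return {
        "raw_request_api_base": api_base,
        "raw_request_headers": safe_headers,
        "raw_request_body": body,
    }
```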
Ishaan Jaff
b02af305de
[Feat] - Display thinking tokens on OpenWebUI (Bedrock, Anthropic, Deepseek) (#9029)
* if merge_reasoning_content_in_choices

* _optional_combine_thinking_block_in_choices

* stash changes

* working merge_reasoning_content_in_choices with bedrock

* fix litellm_params accessor

* fix streaming handler

* merge_reasoning_content_in_choices

* _optional_combine_thinking_block_in_choices

* test_bedrock_stream_thinking_content_openwebui

* merge_reasoning_content_in_choices

* fix for _optional_combine_thinking_block_in_choices

* linting error fix
2025-03-06 18:32:58 -08:00
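
A sketch of the `merge_reasoning_content_in_choices` idea: fold `reasoning_content` into the message content so clients like OpenWebUI, which render `<think>` tags, display thinking tokens. The `<think>`-wrapping convention is an assumption here:

```python
def merge_reasoning_content_in_choices(message: dict) -> dict:
    reasoning = message.pop("reasoning_content", None)
    if reasoning:
        # OpenWebUI renders <think>...</think> as a collapsible thinking section
        message["content"] = f"<think>{reasoning}</think>\n{message.get('content') or ''}"
    return message
```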
Ishaan Jaff
f47987e673
(Refactor) /v1/messages to follow simpler logic for Anthropic API spec (#9013)
* anthropic_messages_handler v0

* fix /messages

* working messages with router methods

* test_anthropic_messages_handler_litellm_router_non_streaming

* test_anthropic_messages_litellm_router_non_streaming_with_logging

* AnthropicMessagesConfig

* _handle_anthropic_messages_response_logging

* working with /v1/messages endpoint

* working /v1/messages endpoint

* refactor to use router factory function

* use aanthropic_messages

* use BaseConfig for Anthropic /v1/messages

* track api key, team on /v1/messages endpoint

* fix get_logging_payload

* BaseAnthropicMessagesTest

* align test config

* test_anthropic_messages_with_thinking

* test_anthropic_streaming_with_thinking

* fix - display anthropic url for debugging

* test_bad_request_error_handling

* test_anthropic_messages_router_streaming_with_bad_request

* fix ProxyException

* test_bad_request_error_handling_streaming

* use provider_specific_header

* test_anthropic_messages_with_extra_headers

* test_anthropic_messages_to_wildcard_model

* fix gcs pub sub test

* standard_logging_payload

* fix unit testing for anthropic /v1/messages support

* fix pass through anthropic messages api

* delete dead code

* fix anthropic pass through response

* revert change to spend tracking utils

* fix get_litellm_metadata_from_kwargs

* fix spend logs payload json

* proxy_pass_through_endpoint_tests

* TestAnthropicPassthroughBasic

* fix pass through tests

* test_async_vertex_proxy_route_api_key_auth

* _handle_anthropic_messages_response_logging

* vertex_credentials

* test_set_default_vertex_config

* test_anthropic_messages_litellm_router_non_streaming_with_logging

* test_ageneric_api_call_with_fallbacks_basic

* test__aadapter_completion
2025-03-06 00:43:08 -08:00
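
With the proxy exposing an Anthropic-spec `/v1/messages` endpoint as refactored above, the official Anthropic SDK can be pointed at it directly. A usage sketch (not part of the refactor itself), assuming a litellm proxy running locally with a proxy-issued virtual key:

```python
import anthropic

client = anthropic.Anthropic(
    base_url="http://localhost:4000",  # litellm proxy
    api_key="sk-litellm-proxy-key",    # virtual key issued by the proxy
)
response = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.content[0].text)
```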
Krish Dholakia
f6535ae6ad
Support format param for specifying image type (#9019)
* fix(transformation.py): support a 'format' parameter for images

allow user to specify mime type

* fix: pass mimetype via 'format' param

* feat(gemini/chat/transformation.py): support 'format' param for gemini

* fix(factory.py): support 'format' param on sync bedrock converse calls

* feat(bedrock/converse_transformation.py): support 'format' param for bedrock async calls

* refactor(factory.py): move to supporting 'format' param in base helper

ensures consistency in param support

* feat(gpt_transformation.py): filter out 'format' param

don't send invalid param to openai

* fix(gpt_transformation.py): fix translation

* fix: fix translation error
2025-03-05 19:52:53 -08:00
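
A sketch of the call shape this PR enables: an explicit mime type via a `format` field on the image part, useful when the URL or base64 alone doesn't reveal the content type. Treat the exact field placement as an assumption based on the commit messages:

```python
import litellm

response = litellm.completion(
    model="gemini/gemini-1.5-flash",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {
                "type": "image_url",
                "image_url": {
                    "url": "data:image/png;base64,iVBORw0KGgo...",
                    "format": "image/png",  # explicit mime type for the provider
                },
            },
        ],
    }],
)
```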
Krish Dholakia
ec4f665e29
Return signature on anthropic streaming + migrate to signature field instead of signature_delta [MINOR bump] (#9021)
* Fix missing signature_delta in thinking blocks when streaming from Claude 3.7 (#8797)

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* test: update test to enforce signature found

* feat(refactor-signature-param-to-be-'signature'-instead-of-'signature_delta'): keeps it in sync with anthropic

* fix: fix linting error

---------

Co-authored-by: Martin Krasser <krasserm@googlemail.com>
2025-03-05 19:33:54 -08:00
Krish Dholakia
5e386c28b2
Litellm dev 03 04 2025 p3 (#8997)
* fix(core_helpers.py): handle litellm_metadata instead of 'metadata'

* feat(batches/): ensure batches logs are written to db

makes batches response dict compatible

* fix(cost_calculator.py): handle batch response being a dictionary

* fix(batches/main.py): modify retrieve endpoints to use @client decorator

enables logging to work on retrieve call

* fix(batches/main.py): fix retrieve batch response type to be 'dict' compatible

* fix(spend_tracking_utils.py): send unique uuid for retrieve batch call type

create batch and retrieve batch share the same id

* fix(spend_tracking_utils.py): prevent duplicate retrieve batch calls from being double counted

* refactor(batches/): refactor cost tracking for batches - do it on retrieve, and within the established litellm_logging pipeline

ensures cost is always logged to db

* fix: fix linting errors

* fix: fix linting error
2025-03-04 21:58:03 -08:00
Krish Dholakia
662c59adcf
Support caching on reasoning content + other fixes (#8973)
* fix(factory.py): pass on anthropic thinking content from assistant call

* fix(factory.py): fix anthropic messages to handle thinking blocks

Fixes https://github.com/BerriAI/litellm/issues/8961

* fix(factory.py): fix bedrock handling for assistant content in messages

Fixes https://github.com/BerriAI/litellm/issues/8961

* feat(convert_dict_to_response.py): handle reasoning content + thinking blocks in chat completion block

ensures caching works for anthropic thinking block

* fix(convert_dict_to_response.py): pass all message params to delta block

ensures streaming delta also contains the reasoning content / thinking block

* test(test_prompt_factory.py): remove redundant test

anthropic now supports assistant as the first message

* fix(factory.py): fix linting errors

* fix: fix code qa

* test: remove falsy test

* fix(litellm_logging.py): fix str conversion
2025-03-04 21:12:16 -08:00
Sunny Wan
f2c2266fd7
Merge branch 'BerriAI:main' into main 2025-03-03 21:37:43 -05:00
Sunny Wan
bdd03405fe Removed unnecessary comments 2025-03-03 18:18:24 -05:00
Krish Dholakia
94d28d59e4
Fix deepseek 'reasoning_content' error (#8963)
* fix(streaming_handler.py): fix deepseek reasoning content streaming

Fixes https://github.com/BerriAI/litellm/issues/8939

* test(test_streaming_handler.py): add unit test to streaming handle 'is_chunk_non_empty' function

ensures 'reasoning_content' is handled correctly
2025-03-03 14:34:10 -08:00
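
A sketch of the check the unit test above targets: a delta that carries only `reasoning_content` is still a non-empty chunk and must not be dropped by the streaming handler:

```python
def is_chunk_non_empty(delta: dict) -> bool:
    return bool(
        delta.get("content")
        or delta.get("tool_calls")
        or delta.get("function_call")
        or delta.get("reasoning_content")  # the deepseek fix: reasoning counts too
    )

assert is_chunk_non_empty({"reasoning_content": "thinking..."})
assert not is_chunk_non_empty({"content": None})
```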
Sunny Wan
fd090c8043 [FEAT] Added snowflake completion provider 2025-03-03 01:20:00 -05:00
Ishaan Jaff
bc9b3e4847
(Bug fix) - don't log messages in model_parameters in StandardLoggingPayload (#8932)
* define model param helper

* use ModelParamHelper

* get_standard_logging_model_parameters

* fix code quality

* get_standard_logging_model_parameters

* StandardLoggingPayload

* test_get_kwargs_for_cache_key

* test_langsmith_key_based_logging

* fix code qa

* fix linting
2025-03-01 13:39:45 -08:00
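
A sketch of the ModelParamHelper idea named above: log only tunable request parameters, never `messages`/`prompt`/`input` content. The allowlist here is illustrative:

```python
_LOGGABLE_MODEL_PARAMS = {
    "temperature", "max_tokens", "top_p", "stream", "n",
    "presence_penalty", "frequency_penalty", "response_format",
}

def get_standard_logging_model_parameters(kwargs: dict) -> dict:
    # filtering by allowlist ensures user content can never leak into
    # the logged model_parameters field
    return {k: v for k, v in kwargs.items() if k in _LOGGABLE_MODEL_PARAMS}
```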
Ishaan Jaff
ee7cd60fdb Revert "(bug fix) - don't log messages, prompt, input in model_parameters in StandardLoggingPayload (#8923)"
This reverts commit a119cb420b.
2025-03-01 11:05:33 -08:00
Ishaan Jaff
6fc9aa1612
(bug fix) - dd tracer, only send traces when user opts into sending dd-trace (#8928)
* fix dd tracing null tracer bug

* fix dd tracing

* fix base aws llm

* test_should_use_dd_tracer
2025-03-01 10:53:36 -08:00
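
A sketch of the opt-in check behind `_should_use_dd_tracer`: only initialize the Datadog tracer when the user explicitly enables it; otherwise take the null-tracer path and send nothing. The exact flag name is an assumption here:

```python
import os

def should_use_dd_tracer() -> bool:
    # flag name assumed for illustration
    return os.getenv("USE_DDTRACE", "false").lower() == "true"

def init_tracer():
    if should_use_dd_tracer():
        from ddtrace import tracer  # imported lazily so dd-trace stays optional
        return tracer
    return None  # null tracer path: no traces are sent
```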
Ishaan Jaff
a119cb420b
(bug fix) - don't log messages, prompt, input in model_parameters in StandardLoggingPayload (#8923)
* fix _get_model_parameters

* test litellm logging

* test litellm logging
2025-03-01 10:27:24 -08:00
Ishaan Jaff
3a086cee06
(Feat) - Show Error Logs on LiteLLM UI (#8904)
* fix test_moderations_bad_model

* use async_post_call_failure_hook

* basic logging errors in DB

* show status on ui

* show status on ui

* ui show request / response side by side

* stash fixes

* working, track raw request

* track error info in metadata

* fix showing error / request / response logs

* show traceback on error viewer

* ui with traceback of error

* fix async_post_call_failure_hook

* fix(http_parsing_utils.py): orjson can throw errors on some emoji's in text, default to json.loads

* test_get_error_information

* fix code quality

* rename proxy track cost callback test

* _should_store_errors_in_spend_logs

* feature flag error logs

* Revert "_should_store_errors_in_spend_logs"

This reverts commit 7f345df477.

* Revert "feature flag error logs"

This reverts commit 0e90c022bb.

* test_spend_logs_payload

* fix OTEL log_db_metrics

* fix import json

* fix ui linting error

* test_async_post_call_failure_hook

* test_chat_completion_bad_model_with_spend_logs

---------

Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
2025-02-28 20:10:09 -08:00
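
One concrete fix in the PR above is worth a sketch: orjson rejects some inputs (e.g. text with lone surrogates from emoji handling) that the stdlib parser tolerates, so parsing falls back to `json.loads` on failure:

```python
import json

import orjson

def safe_json_loads(body: bytes) -> dict:
    try:
        return orjson.loads(body)
    except orjson.JSONDecodeError:
        # stdlib json is slower but more permissive about odd unicode
        return json.loads(body)
```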
Krish Dholakia
c84b489d58
Fix bedrock passing response_format: {"type": "text"} (#8900)
* fix(converse_transformation.py): ignore type: text, value in response_format

no-op for bedrock

* fix(converse_transformation.py): handle adding response format value to tools

* fix(base_invoke_transformation.py): fix 'get_bedrock_invoke_provider' to handle cross-region-inferencing models

* test(test_bedrock_completion.py): add unit testing for bedrock invoke provider logic

* test: update test

* fix(exception_mapping_utils.py): add context window exceeded error handling for databricks provider route

* fix(fireworks_ai/): support passing tools + response_format together

* fix: cleanup

* fix(base_invoke_transformation.py): fix imports
2025-02-28 20:09:59 -08:00
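
A sketch of the no-op described above: `{"type": "text"}` is the OpenAI default and has no Converse-API equivalent, so it is ignored rather than forced into a JSON-mode tool (simplified relative to converse_transformation.py):

```python
from typing import Optional

def transform_response_format(response_format: Optional[dict]) -> Optional[dict]:
    if not response_format or response_format.get("type") == "text":
        return None  # nothing to add to the bedrock request
    return response_format  # e.g. json_schema handling continues elsewhere
```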
Krish Dholakia
3de4209569
fix caching on main branch (#8858)
* fix(streaming_handler.py): fix is delta empty check to handle empty str

* fix(streaming_handler.py): fix delta chunk on final response
2025-02-26 19:16:34 -08:00
Krish Dholakia
ab7c4d1a0e
Litellm dev bedrock anthropic 3 7 v2 (#8843)
* feat(bedrock/converse/transformation.py): support claude-3-7-sonnet reasoning_content transformation

Closes https://github.com/BerriAI/litellm/issues/8777

* fix(bedrock/): support returning `reasoning_content` on streaming for claude-3-7

Resolves https://github.com/BerriAI/litellm/issues/8777

* feat(bedrock/): unify converse reasoning content blocks for consistency across anthropic and bedrock

* fix(anthropic/chat/transformation.py): handle deepseek-style 'reasoning_content' extraction within transformation.py

simpler logic

* feat(bedrock/): fix streaming to return blocks in consistent format

* fix: fix linting error

* test: fix test

* feat(factory.py): fix bedrock thinking block translation on tool calling

allows passing the thinking blocks back to bedrock for tool calling

* fix(types/utils.py): don't exclude provider_specific_fields on model dump

ensures consistent responses

* fix: fix linting errors

* fix(convert_dict_to_response.py): pass reasoning_content on root

* fix: test

* fix(streaming_handler.py): add helper util for setting model id

* fix(streaming_handler.py): fix setting model id on model response stream chunk

* fix(streaming_handler.py): fix linting error

* fix(streaming_handler.py): fix linting error

* fix(types/utils.py): add provider_specific_fields to model stream response

* fix(streaming_handler.py): copy provider specific fields and add them to the root of the streaming response

* fix(streaming_handler.py): fix check

* fix: fix test

* fix(types/utils.py): ensure messages content is always openai compatible

* fix(types/utils.py): fix delta object to always be openai compatible

only introduce new params if variable exists

* test: fix bedrock nova tests

* test: skip flaky test

* test: skip flaky test in ci/cd
2025-02-26 16:05:33 -08:00
Krrish Dholakia
fcf4ea3608 build: merge squashed commit
Squashed commit of the following:

commit 6678e15381
Author: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Date:   Wed Feb 26 09:29:15 2025 -0800

    test_prompt_caching

commit bd86e0ac47
Author: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Date:   Wed Feb 26 08:57:16 2025 -0800

    test_prompt_caching

commit 2fc21ad51e
Author: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Date:   Wed Feb 26 08:13:45 2025 -0800

    test_aprompt_caching

commit d94cff55ff
Author: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Date:   Wed Feb 26 08:13:12 2025 -0800

    test_prompt_caching

commit 49c5e7811e
Author: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Date:   Wed Feb 26 07:43:53 2025 -0800

    ui new build

commit cb8d5e5917
Author: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Date:   Wed Feb 26 07:38:56 2025 -0800

    (UI) - Create Key flow for existing users (#8844)

    * working create user button

    * working create user for a key flow

    * allow searching users

    * working create user + key

    * use clear sections on create key

    * better search for users

    * fix create key

    * ui fix create key button - make it neater / cleaner

    * ui fix all keys table

commit 335ba30467
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Wed Feb 26 08:53:17 2025 -0800

    fix: fix file name

commit b8c5b31a4e
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Tue Feb 25 22:54:46 2025 -0800

    fix: fix utils

commit ac6e503461
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Feb 24 10:43:31 2025 -0800

    fix(main.py): fix openai message for assistant msg if role is missing - openai allows this

    Fixes https://github.com/BerriAI/litellm/issues/8661

commit de3989dbc5
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Feb 24 21:19:25 2025 -0800

    fix(get_litellm_params.py): handle no-log being passed in via kwargs

    Fixes https://github.com/BerriAI/litellm/issues/8380
2025-02-26 09:39:27 -08:00
Ishaan Jaff
7021f2f244
(Bug fix) dd-trace used by default on litellm proxy (#8817)
* fix _should_use_dd_tracer

* fix _should_use_dd_tracer

* _should_use_dd_tracer

* _should_use_dd_tracer

* _should_use_dd_tracer

* _init_dd_tracer

* _should_use_dd_tracer

* fix should use dd-tracer

* fix dd tracer
2025-02-25 19:54:22 -08:00
Krish Dholakia
142b195784
Add anthropic thinking + reasoning content support (#8778)
* feat(anthropic/chat/transformation.py): add anthropic thinking param support

* feat(anthropic/chat/transformation.py): support returning thinking content for anthropic on streaming responses

* feat(anthropic/chat/transformation.py): return list of thinking blocks (include block signature)

allows usage in tool call responses

* fix(types/utils.py): extract and map reasoning_content from anthropic as content str

* test: add testing to ensure thinking_blocks are returned at the root

* fix(anthropic/chat/handler.py): return thinking blocks on streaming - include signature

* feat(factory.py): handle anthropic thinking blocks translation if in assistant response

* test: handle openai internal instability

* test: handle openai audio instability

* ci: pin anthropic dep

* test: handle openai audio instability

* fix: fix linting error

* refactor(anthropic/chat/transformation.py): refactor function to remain <50 LOC

* fix: fix linting error

* fix: fix linting error

* fix: fix linting error

* fix: fix linting error
2025-02-24 21:54:30 -08:00
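
A usage sketch for the feature this PR adds: request Anthropic extended thinking through litellm and read back the mapped fields. The `thinking` value follows Anthropic's documented shape; the `reasoning_content`/`thinking_blocks` access assumes the mapping described in the commits above:

```python
import litellm

response = litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",
    messages=[{"role": "user", "content": "What is 27 * 43?"}],
    thinking={"type": "enabled", "budget_tokens": 1024},
)
message = response.choices[0].message
print(message.reasoning_content)  # extracted reasoning as a plain string
print(message.thinking_blocks)    # raw thinking blocks, including the signature
```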