Commit graph

290 commits

Author SHA1 Message Date
Krrish Dholakia
937f0e2ca2 test(test_bedrock_completion.py): ensure model id in model name just works 2025-03-15 14:09:37 -07:00
Krrish Dholakia
5dc46f0cf7 fix(converse_transformation.py): fix encoding model 2025-03-15 14:03:37 -07:00
Krish Dholakia
d4caaae1be
Merge pull request #9274 from BerriAI/litellm_contributor_rebase_branch
Litellm contributor rebase branch
2025-03-14 21:57:49 -07:00
Brian Dev
ff3f79e468 Update test_ollama_pt 2025-03-15 01:07:15 +07:00
Brian Dev
51589364a1 Add new line 2025-03-15 01:03:12 +07:00
Brian Dev
12db28b0af Support 'system' role ollama 2025-03-15 00:55:18 +07:00
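A minimal sketch of the 'system'-role path patched in the three commits above, assuming a locally running ollama server with the llama2 model pulled (hypothetical local setup):

```python
import litellm

# A system-role message exercises the ollama prompt-template path
# patched above; server address and model are assumptions.
response = litellm.completion(
    model="ollama/llama2",
    messages=[
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "Name one prime number."},
    ],
)
print(response.choices[0].message.content)
```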
Lucas Raschek
88798d4b3d Add unit test 2025-03-14 18:14:41 +01:00
Sunny Wan
f9a5109203
Merge branch 'BerriAI:main' into main 2025-03-13 19:37:22 -04:00
Krish Dholakia
2c011d9a93
Merge pull request #9123 from omrishiv/8911-fix-model-encoding
Fixes bedrock modelId encoding for Inference Profiles
2025-03-13 10:42:32 -07:00
Krish Dholakia
cff1c1f7d8
Merge branch 'main' into litellm_dev_03_12_2025_p1 2025-03-12 22:14:02 -07:00
Krrish Dholakia
3714694c60 fix: fix method signature in test 2025-03-12 15:32:35 -07:00
Krrish Dholakia
738c0b873d fix(azure_ai/transformation.py): support passing api version to azure ai services endpoint
Fixes https://github.com/BerriAI/litellm/issues/7275
2025-03-12 15:16:42 -07:00
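A hedged sketch of the fix above, assuming the standard `api_version` keyword; the deployment name, endpoint, key, and version string are all placeholders:

```python
import litellm

# The fix above makes api_version below reach the Azure AI services
# endpoint instead of being dropped; all credential values are placeholders.
response = litellm.completion(
    model="azure_ai/my-deployment",                    # hypothetical deployment
    api_base="https://example.services.ai.azure.com",
    api_key="sk-placeholder",
    api_version="2024-05-01-preview",                  # assumed version string
    messages=[{"role": "user", "content": "ping"}],
)
```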
Krish Dholakia
2d957a0ed9
Merge branch 'main' into litellm_dev_03_10_2025_p3 2025-03-12 14:56:01 -07:00
Krrish Dholakia
92d85555fe fix(invoke_handler.py): fix converse chunk parsing to only return empty dict on tool use
Fixes https://github.com/BerriAI/litellm/issues/9127
2025-03-11 22:04:17 -07:00
Krrish Dholakia
16224f8db6 fix(o_series_handler.py): handle async calls 2025-03-11 21:22:13 -07:00
Krrish Dholakia
a3a3e6fe13 test: fix test 2025-03-11 18:52:00 -07:00
Krrish Dholakia
9af73f339a test: fix tests 2025-03-11 17:42:36 -07:00
Sunny Wan
844c27a9b2 added mock_tests 2025-03-11 16:32:15 -04:00
omrishiv
cf8084b5f9 fix encoding in tests 2025-03-11 08:57:05 -07:00
Krish Dholakia
86a5926e26
Merge pull request #9113 from BerriAI/litellm_dev_03_10_2025_p2
fix(base_invoke_transformation.py): support extra_headers on bedrock …
2025-03-10 22:19:10 -07:00
Krrish Dholakia
68bd05ac24 fix(base_invoke_transformation.py): support extra_headers on bedrock invoke route
Fixes https://github.com/BerriAI/litellm/issues/9106
2025-03-10 16:13:11 -07:00
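A sketch of the `extra_headers` support added above; the header name and value are hypothetical, and the `bedrock/invoke/` prefix selects the invoke (non-converse) route:

```python
import litellm

# extra_headers (a standard litellm kwarg) now reaches the bedrock
# invoke route per the fix above; the header itself is hypothetical.
response = litellm.completion(
    model="bedrock/invoke/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": "hi"}],
    extra_headers={"X-Team-Id": "demo"},
)
```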
Ishaan Jaff
94667e1cf0
Merge pull request #8386 from minwhoo/triton-completions-streaming-fix
Fix triton streaming completions bug
2025-03-10 16:07:19 -07:00
Ishaan Jaff
b02af305de
[Feat] - Display thinking tokens on OpenWebUI (Bedrock, Anthropic, Deepseek) (#9029)
* if merge_reasoning_content_in_choices

* _optional_combine_thinking_block_in_choices

* stash changes

* working merge_reasoning_content_in_choices with bedrock

* fix litellm_params accessor

* fix streaming handler

* merge_reasoning_content_in_choices

* _optional_combine_thinking_block_in_choices

* test_bedrock_stream_thinking_content_openwebui

* merge_reasoning_content_in_choices

* fix for _optional_combine_thinking_block_in_choices

* linting error fix
2025-03-06 18:32:58 -08:00
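A sketch of what the PR above enables from the SDK side; the `merge_reasoning_content_in_choices` kwarg name is taken from the commit titles and should be treated as an assumption:

```python
import litellm

# Assumes merge_reasoning_content_in_choices is accepted as a kwarg,
# as the commit titles above suggest; with it set, reasoning tokens are
# folded into choices[].message.content (so UIs like OpenWebUI can
# render them) rather than returned as a separate field.
response = litellm.completion(
    model="bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    thinking={"type": "enabled", "budget_tokens": 1024},
    merge_reasoning_content_in_choices=True,
)
print(response.choices[0].message.content)
```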
Ishaan Jaff
1d2a9e423c test - remove anthropic_adapter tests. no longer used 2025-03-06 06:47:35 -08:00
Ishaan Jaff
29dc67a2aa test fix anthropic completion 2025-03-06 06:42:26 -08:00
Krrish Dholakia
320cb1d51a docs: cleanup 'signature_delta' from docs 2025-03-05 23:53:38 -08:00
Krish Dholakia
744e10b0f0
Litellm dev 03 05 2025 p3 (#9023)
* fix(invoke_handler.py): fix converse streaming - return signature + ensure consistency with anthropic api response

* build(model_prices_and_context_window.json): fix anthropic api claude-3-7 max output tokens

with beta header this is 128k

Resolves https://github.com/BerriAI/litellm/issues/8964

* feat(handler.py): handle new anthropic 'thinking_delta' block on streaming

Fixes https://github.com/BerriAI/litellm/issues/8825
2025-03-05 22:31:39 -08:00
Krish Dholakia
f6535ae6ad
Support format param for specifying image type (#9019)
* fix(transformation.py): support a 'format' parameter for images

allow user to specify mime type

* fix: pass mimetype via 'format' param

* feat(gemini/chat/transformation.py): support 'format' param for gemini

* fix(factory.py): support 'format' param on sync bedrock converse calls

* feat(bedrock/converse_transformation.py): support 'format' param for bedrock async calls

* refactor(factory.py): move to supporting 'format' param in base helper

ensures consistency in param support

* feat(gpt_transformation.py): filter out 'format' param

don't send invalid param to openai

* fix(gpt_transformation.py): fix translation

* fix: fix translation error
2025-03-05 19:52:53 -08:00
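A minimal sketch of the 'format' param added above, under the reading that it pins an image's MIME type instead of letting litellm infer it; the URL is a placeholder:

```python
import litellm

# The 'format' key is forwarded to providers that require an explicit
# MIME type (gemini/bedrock) and filtered out for OpenAI, per the PR above.
response = litellm.completion(
    model="gemini/gemini-1.5-flash",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://example.com/photo",  # placeholder URL
                    "format": "image/png",               # explicit MIME type
                },
            },
        ],
    }],
)
```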
Krish Dholakia
ec4f665e29
Return signature on anthropic streaming + migrate to signature field instead of signature_delta [MINOR bump] (#9021)
* Fix missing signature_delta in thinking blocks when streaming from Claude 3.7 (#8797)

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* test: update test to enforce signature found

* feat(refactor-signature-param-to-be-'signature'-instead-of-'signature_delta'): keeps it in sync with anthropic

* fix: fix linting error

---------

Co-authored-by: Martin Krasser <krasserm@googlemail.com>
2025-03-05 19:33:54 -08:00
Krish Dholakia
f1a44d1fdc
fix(common_utils.py): handle $id in response schema when calling vert… (#8991)
* fix(common_utils.py): handle $id in response schema when calling vertex ai

Fixes issue where `$id` present in response_schema was not accepted by vertex ai

* test(test_vertex.py): add unit test to ensure $id stripped out of vertex schema
2025-03-04 21:19:50 -08:00
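A sketch of a request that previously failed, assuming the standard json_schema response_format shape; before the fix above, the `$id` key was forwarded verbatim and rejected by Vertex AI:

```python
import litellm

# "$id" is a JSON Schema keyword Vertex AI rejects; the fix above
# strips it before sending. Schema contents here are illustrative.
schema = {
    "$id": "https://example.com/person.schema.json",  # now stripped for vertex
    "type": "object",
    "properties": {"name": {"type": "string"}},
    "required": ["name"],
}
response = litellm.completion(
    model="vertex_ai/gemini-1.5-pro",
    messages=[{"role": "user", "content": "Give me a person."}],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "person", "schema": schema},
    },
)
```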
Krish Dholakia
662c59adcf
Support caching on reasoning content + other fixes (#8973)
* fix(factory.py): pass on anthropic thinking content from assistant call

* fix(factory.py): fix anthropic messages to handle thinking blocks

Fixes https://github.com/BerriAI/litellm/issues/8961

* fix(factory.py): fix bedrock handling for assistant content in messages

Fixes https://github.com/BerriAI/litellm/issues/8961

* feat(convert_dict_to_response.py): handle reasoning content + thinking blocks in chat completion block

ensures caching works for anthropic thinking block

* fix(convert_dict_to_response.py): pass all message params to delta block

ensures streaming delta also contains the reasoning content / thinking block

* test(test_prompt_factory.py): remove redundant test

anthropic now supports assistant as the first message

* fix(factory.py): fix linting errors

* fix: fix code qa

* test: remove falsy test

* fix(litellm_logging.py): fix str conversion
2025-03-04 21:12:16 -08:00
Sunny Wan
f2c2266fd7
Merge branch 'BerriAI:main' into main 2025-03-03 21:37:43 -05:00
Sunny Wan
c413686ead wrote tests for snowflake 2025-03-03 17:49:11 -05:00
Krish Dholakia
94d28d59e4
Fix deepseek 'reasoning_content' error (#8963)
* fix(streaming_handler.py): fix deepseek reasoning content streaming

Fixes https://github.com/BerriAI/litellm/issues/8939

* test(test_streaming_handler.py): add unit test for the streaming handler's 'is_chunk_non_empty' function

ensures 'reasoning_content' is handled correctly
2025-03-03 14:34:10 -08:00
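A sketch of consuming such a stream, assuming deltas expose a `reasoning_content` attribute as the commits above describe; chunks carrying only reasoning tokens are no longer dropped:

```python
import litellm

# Assumes delta.reasoning_content per the commits above.
stream = litellm.completion(
    model="deepseek/deepseek-reasoner",
    messages=[{"role": "user", "content": "Is 9.11 > 9.9?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta
    if getattr(delta, "reasoning_content", None):
        print("[thinking]", delta.reasoning_content, end="")
    if delta.content:
        print(delta.content, end="")
```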
Krrish Dholakia
4418e6dd14 build: merge branch 2025-03-02 08:31:57 -08:00
Krish Dholakia
c84b489d58
Fix bedrock passing response_format: {"type": "text"} (#8900)
* fix(converse_transformation.py): ignore type: text, value in response_format

no-op for bedrock

* fix(converse_transformation.py): handle adding response format value to tools

* fix(base_invoke_transformation.py): fix 'get_bedrock_invoke_provider' to handle cross-region-inferencing models

* test(test_bedrock_completion.py): add unit testing for bedrock invoke provider logic

* test: update test

* fix(exception_mapping_utils.py): add context window exceeded error handling for databricks provider route

* fix(fireworks_ai/): support passing tools + response_format together

* fix: cleanup

* fix(base_invoke_transformation.py): fix imports
2025-02-28 20:09:59 -08:00
Krish Dholakia
c8dc4f3eec
converse_transformation: pass 'description' if set in response_format (#8907)
* test(test_bedrock_completion.py): e2e test ensuring tool description is passed in

* fix(converse_transformation.py): pass description, if set

* fix(transformation.py): Fixes https://github.com/BerriAI/litellm/issues/8767#issuecomment-2689887663
2025-02-28 18:47:07 -08:00
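A sketch of the path fixed above: the json_schema `description` is now forwarded to the bedrock converse tool definition built from this response_format; schema contents are illustrative:

```python
import litellm

# The 'description' below now reaches the bedrock converse tool,
# per the commit above.
response = litellm.completion(
    model="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": "Extract the user's name."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "extraction",
            "description": "Extracted fields from the user message.",
            "schema": {
                "type": "object",
                "properties": {"name": {"type": "string"}},
            },
        },
    },
)
```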
Krish Dholakia
a65bfab697
Fix calling claude via invoke route + response_format support for claude on invoke route (#8908)
* fix(anthropic_claude3_transformation.py): fix amazon anthropic claude 3 tool calling transformation on invoke route

move to using anthropic config as base

* fix(utils.py): expose anthropic config via providerconfigmanager

* fix(llm_http_handler.py): support json mode on async completion calls

* fix(invoke_handler/make_call): support json mode for anthropic called via bedrock invoke

* fix(anthropic/): handle 'response_format: {"type": "text"}' + migrate amazon claude 3 invoke config to inherit from anthropic config

Prevents error when passing in 'response_format: {"type": "text"}'

* test: fix test

* fix(utils.py): fix base invoke provider check

* fix(anthropic_claude3_transformation.py): don't pass 'stream' param

* fix: fix linting errors

* fix(converse_transformation.py): handle response_format type=text for converse
2025-02-28 17:56:26 -08:00
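A sketch of the json-mode support the PR above adds for claude on the invoke route; the `bedrock/invoke/` prefix forces the invoke (non-converse) route:

```python
import litellm

# response_format json mode now works on the bedrock invoke route
# for anthropic models, per the PR above.
response = litellm.completion(
    model="bedrock/invoke/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": 'Return {"ok": true}'}],
    response_format={"type": "json_object"},
)
```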
Ishaan Jaff
6231052b18
[Bug]: Deepseek error on proxy after upgrading to 1.61.13-stable (#8860)
* fix deepseek error

* test_deepseek_provider_async_completion

* fix get_complete_url
2025-02-26 21:11:06 -08:00
Krish Dholakia
ab7c4d1a0e
Litellm dev bedrock anthropic 3 7 v2 (#8843)
* feat(bedrock/converse/transformation.py): support claude-3-7-sonnet reasoning_content transformation

Closes https://github.com/BerriAI/litellm/issues/8777

* fix(bedrock/): support returning `reasoning_content` on streaming for claude-3-7

Resolves https://github.com/BerriAI/litellm/issues/8777

* feat(bedrock/): unify converse reasoning content blocks for consistency across anthropic and bedrock

* fix(anthropic/chat/transformation.py): handle deepseek-style 'reasoning_content' extraction within transformation.py

simpler logic

* feat(bedrock/): fix streaming to return blocks in consistent format

* fix: fix linting error

* test: fix test

* feat(factory.py): fix bedrock thinking block translation on tool calling

allows passing the thinking blocks back to bedrock for tool calling

* fix(types/utils.py): don't exclude provider_specific_fields on model dump

ensures consistent responses

* fix: fix linting errors

* fix(convert_dict_to_response.py): pass reasoning_content on root

* fix: test

* fix(streaming_handler.py): add helper util for setting model id

* fix(streaming_handler.py): fix setting model id on model response stream chunk

* fix(streaming_handler.py): fix linting error

* fix(streaming_handler.py): fix linting error

* fix(types/utils.py): add provider_specific_fields to model stream response

* fix(streaming_handler.py): copy provider specific fields and add them to the root of the streaming response

* fix(streaming_handler.py): fix check

* fix: fix test

* fix(types/utils.py): ensure messages content is always openai compatible

* fix(types/utils.py): fix delta object to always be openai compatible

only introduce new params if variable exists

* test: fix bedrock nova tests

* test: skip flaky test

* test: skip flaky test in ci/cd
2025-02-26 16:05:33 -08:00
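A sketch of reading the unified output the PR above describes for claude-3-7 on bedrock; the attribute names (`reasoning_content`, `thinking_blocks`) come from the commit messages, so treat the exact shape as an assumption:

```python
import litellm

# reasoning_content (flattened string) and thinking_blocks (structured,
# with signatures for tool-call replay) per the commits above.
response = litellm.completion(
    model="bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0",
    messages=[{"role": "user", "content": "Plan a 3-step proof sketch."}],
    thinking={"type": "enabled", "budget_tokens": 1024},
)
message = response.choices[0].message
print(message.reasoning_content)
for block in message.thinking_blocks or []:
    print(block.get("signature"))
```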
Krish Dholakia
017c482d7b
fix(o_series_transformation.py): fix optional param check for o-serie… (#8787)
* fix(o_series_transformation.py): fix optional param check for o-series models

o3-mini and o1 do not support parallel tool calling

* fix(utils.py): support 'drop_params' for 'thinking' param across models

allows switching to older claude versions (or non-anthropic models) and param to be safely dropped

* fix: fix passing thinking param in optional params

allows dropping thinking_param where not applicable

* test: update old model

* fix(utils.py): fix linting errors

* fix(main.py): add param to acompletion
2025-02-26 12:26:55 -08:00
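A sketch of the `drop_params` behavior added above: the same call site can serve claude-3-7 and models that reject the `thinking` param, which is silently dropped where unsupported:

```python
import litellm

# With drop_params=True, the thinking param is dropped for models
# that don't accept it (older claude versions, non-anthropic models).
for model in ["anthropic/claude-3-7-sonnet-20250219",
              "anthropic/claude-3-5-haiku-20241022"]:
    response = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": "hi"}],
        thinking={"type": "enabled", "budget_tokens": 1024},
        drop_params=True,
    )
```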
Ishaan Jaff
da1fd9b25f test_prompt_caching 2025-02-26 09:29:15 -08:00
Ishaan Jaff
4858417283 test_prompt_caching 2025-02-26 08:57:16 -08:00
Ishaan Jaff
2cf66f8267 test_aprompt_caching 2025-02-26 08:13:45 -08:00
Ishaan Jaff
56b2576979 test_prompt_caching 2025-02-26 08:13:12 -08:00
Ishaan Jaff
f9cee4c46b
(Bug Fix) Using LiteLLM Python SDK with model=litellm_proxy/ for embedding, image_generation, transcription, speech, rerank (#8815)
* test_litellm_gateway_from_sdk

* fix embedding check for openai

* test litellm proxy provider

* fix image generation openai compatible models

* fix litellm.transcription

* test_litellm_gateway_from_sdk_rerank

* docs litellm python sdk

* docs litellm python sdk with proxy

* test_litellm_gateway_from_sdk_rerank

* ci/cd run again

* test_litellm_gateway_from_sdk_image_generation

* test_litellm_gateway_from_sdk_embedding

* test_litellm_gateway_from_sdk_embedding
2025-02-25 16:22:37 -08:00
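A sketch of the embedding path fixed above; the `litellm_proxy/` prefix now routes embedding (and image generation, transcription, speech, rerank) calls through the proxy, not just chat. Proxy URL and key are placeholders:

```python
import litellm

# Placeholder proxy address and key; assumes a litellm proxy exposing
# an OpenAI-compatible embedding model.
response = litellm.embedding(
    model="litellm_proxy/text-embedding-3-small",
    input=["hello world"],
    api_base="http://localhost:4000",
    api_key="sk-proxy-placeholder",
)
print(len(response.data[0]["embedding"]))
```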
Krrish Dholakia
de8497309b docs(anthropic.md): add claude-3-7-sonnet support
2025-02-25 00:06:30 -08:00
Krish Dholakia
142b195784
Add anthropic thinking + reasoning content support (#8778)
* feat(anthropic/chat/transformation.py): add anthropic thinking param support

* feat(anthropic/chat/transformation.py): support returning thinking content for anthropic on streaming responses

* feat(anthropic/chat/transformation.py): return list of thinking blocks (include block signature)

allows usage in tool call responses

* fix(types/utils.py): extract and map reasoning_content from anthropic as content str

* test: add testing to ensure thinking_blocks are returned at the root

* fix(anthropic/chat/handler.py): return thinking blocks on streaming - include signature

* feat(factory.py): handle anthropic thinking blocks translation if in assistant response

* test: handle openai internal instability

* test: handle openai audio instability

* ci: pin anthropic dep

* test: handle openai audio instability

* fix: fix linting error

* refactor(anthropic/chat/transformation.py): refactor function to remain <50 LOC

* fix: fix linting error

* fix: fix linting error

* fix: fix linting error

* fix: fix linting error
2025-02-24 21:54:30 -08:00
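A minimal sketch of the `thinking` param this PR introduces on the direct anthropic route; the model identifier matches claude-3-7 sonnet at the time of these commits:

```python
import litellm

# thinking param per the PR above; reasoning_content is the flattened
# string form of the returned thinking blocks.
response = litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",
    messages=[{"role": "user", "content": "How many r's in strawberry?"}],
    thinking={"type": "enabled", "budget_tokens": 1024},
)
print(response.choices[0].message.reasoning_content)
```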
Ishaan Jaff
b93889660a
fix: remove aws params from bedrock embedding request body (#8618) (#8696)
* fix: remove aws params from bedrock embedding request body (#8618)

* fix: remove aws params from bedrock embedding request body

* fix-7548: handle aws params in base class

* test: load request data from mock call

* (Infra/DB) - Allow running older litellm version when out of sync with current state of DB  (#8695)

* fix check migration

* clean up should_update_prisma_schema

* update test

* db_migration_disable_update_check

* Check container logs for expected message

* db_migration_disable_update_check

* test_check_migration_out_of_sync

* test_should_update_prisma_schema

* db_migration_disable_update_check

* pip install aiohttp

* ui new build

* delete deprecated code test

* bump: version 1.61.12 → 1.61.13

* Add cost tracking for rerank via bedrock (#8691)

* feat(bedrock/rerank): infer model region if model given as arn

* test: add unit testing to ensure bedrock region name inferred from arn on rerank

* feat(bedrock/rerank/transformation.py): include search units for bedrock rerank result

Resolves https://github.com/BerriAI/litellm/issues/7258#issuecomment-2671557137

* test(test_bedrock_completion.py): add testing for bedrock cohere rerank

* feat(cost_calculator.py): refactor rerank cost tracking to support bedrock cost tracking

* build(model_prices_and_context_window.json): add amazon.rerank model to model cost map

* fix(cost_calculator.py): bedrock/common_utils.py

get base model from model w/ arn -> handles rerank model

* build(model_prices_and_context_window.json): add bedrock cohere rerank pricing

* feat(bedrock/rerank): migrate bedrock config to basererank config

* Revert "feat(bedrock/rerank): migrate bedrock config to basererank config"

This reverts commit 84fae1f167.

* test: add testing to ensure large doc / queries are correctly counted

* Revert "test: add testing to ensure large doc / queries are correctly counted"

This reverts commit 4337f1657e.

* fix(migrate-jina-ai-to-rerank-config): enables cost tracking

* refactor(jina_ai/): finish migrating jina ai to base rerank config

enables cost tracking

* fix(jina_ai/rerank): e2e jina ai rerank cost tracking

* fix: cleanup dead code

* fix: fix python3.8 compatibility error

* test: fix test

* test: add e2e testing for azure ai rerank

* fix: fix linting error

* test: mark cohere as flaky

* add bedrock llama vision support + cohere / infinity rerank - 'return_documents' support  (#8684)

* build(model_prices_and_context_window.json): mark bedrock llama as supporting vision based on docs

* Add price for Cerebras llama3.3-70b (#8676)

* docs(readme.md): fix contributing docs

point people to new mock directory testing structure s/o @vibhavbhat

* build: update contributing readme

* docs(readme.md): improve docs

* docs(readme.md): cleanup readme on tests/

* docs(README.md): cleanup doc

* feat(infinity/): support returning documents when return_documents=True

* test(test_rerank.py): add e2e testing for cohere rerank

* fix: fix linting errors

* fix(together_ai/): fix together ai transformation

* fix: fix linting error

* fix: fix linting errors

* fix: fix linting errors

* test: mark cohere as flaky

* build: fix model supports check

* test: fix test

* test: mark flaky test

* fix: fix test

* test: fix test

---------

Co-authored-by: Yury Koleda <fut.wrk@gmail.com>

* test: fix test

* fix: remove unused import

* bump: version 1.61.13 → 1.61.14

* Correct spelling in user_management_heirarchy.md (#8716)

Fixing irritating typo -- page and image names would also need to be updated

* (Feat) - UI, Allow sorting models by Created_At and all other columns on the UI (#8725)

* order models by created at

* use existing table component on models page

* sorting for created at

* ui clean up models page

* remove provider filter

* fix columns sorting

* decent switching

* ui fix models page

* (UI) Edit Model flow improvements (#8729)

* order models by created at

* use existing table component on models page

* sorting for created at

* ui clean up models page

* remove provider filter

* fix columns sorting

* decent switching

* ui fix models page

* show edit / delete button on root of table

* clean up columns

* working edit model flow

* decent working model edit page

* fix edit model

* show created at and created by

* ui easy model edit flow

* clean up columns

* ui clean up updated at

* fix model datatable

* ui new build

* bump: version 1.61.14 → 1.61.15

* Support arize phoenix on litellm proxy (#7756) (#8715)

* Update opentelemetry.py

wip

* Update test_opentelemetry_unit_tests.py

* fix a few paths and tests

* fix path

* Update litellm_logging.py

* accidentally removed code

* Add type for protocol

* Add and update tests

* minor changes

* update and add additional arize phoenix test

* update existing test

* address feedback

* use standard_logging_object

* address feedback

Co-authored-by: Nate Mar <67926244+nate-mar@users.noreply.github.com>

* fix(amazon_deepseek_transformation.py): remove </think> from stream o… (#8717)

* fix(amazon_deepseek_transformation.py): remove </think> from stream output - cleanup user facing stream

* fix(key_managenet_endpoints.py): return `/key/list` sorted by created_at

makes it easier to see created key

* style: cleanup team table

* feat(key_edit_view.tsx): support setting model specific tpm/rpm limits on keys

* Add cohere v2/rerank support (#8421) (#8605)

* Add cohere v2/rerank support (#8421)

* Support v2 endpoint cohere rerank

* Add tests and docs

* Make v1 default if old params used

* Update docs

* Update docs pt 2

* Update tests

* Add e2e test

* Clean up code

* Use inheritance for new config

* Fix linting issues (#8608)

* Fix cohere v2 failing test + linting (#8672)

* Fix test and unused imports

* Fix tests

* fix: fix linting errors

* test: handle tgai instability

* fix: skip service unavailable err

* test: print logs for unstable test

* test: skip unreliable tests

---------

Co-authored-by: vibhavbhat <vibhavb00@gmail.com>

* fix(proxy/_types.py): fixes issue where internal user able to escalat… (#8740)

* fix(proxy/_types.py): fixes issue where internal user able to escalate their role with ui key

Fixes https://github.com/BerriAI/litellm/issues/8029

* style: cleanup

* test: handle bedrock instability

---------

Co-authored-by: Madhukar Holla <mholla8@gmail.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Yury Koleda <fut.wrk@gmail.com>
Co-authored-by: Oskar Austegard <oskar@austegard.com>
Co-authored-by: Nate Mar <67926244+nate-mar@users.noreply.github.com>
Co-authored-by: vibhavbhat <vibhavb00@gmail.com>
2025-02-24 10:04:58 -08:00
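Among the changes folded into the squashed commit above are rerank cost tracking and 'return_documents' support for cohere/infinity rerank. A hedged sketch of the call shape, with illustrative documents:

```python
import litellm

# return_documents support per the squashed commit above; model name
# follows litellm's cohere rerank convention.
response = litellm.rerank(
    model="cohere/rerank-english-v3.0",
    query="What is the capital of France?",
    documents=["Paris is the capital of France.",
               "Berlin is the capital of Germany."],
    top_n=1,
    return_documents=True,
)
print(response.results[0])
```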
Krish Dholakia
566d9354aa
fix(proxy/_types.py): fixes issue where internal user able to escalat… (#8740)
* fix(proxy/_types.py): fixes issue where internal user able to escalate their role with ui key

Fixes https://github.com/BerriAI/litellm/issues/8029

* style: cleanup

* test: handle bedrock instability
2025-02-22 22:59:58 -08:00