Commit graph

680 commits

Author SHA1 Message Date
Ishaan Jaff
79ef184345 run ci/cd again 2025-03-25 21:57:45 -07:00
Ishaan Jaff
fca5926600 default to use SLP for GCS PubSub 2025-03-24 15:21:59 -07:00
Ishaan Jaff
1d7accce9e test_supports_web_search 2025-03-22 13:49:35 -07:00
Ishaan Jaff
075f3537f6 bump version 2025-03-21 21:03:42 -07:00
Sunny Wan
c942f4cd86 Merge branch 'main' of https://github.com/SunnyWan59/litellm 2025-03-13 19:42:25 -04:00
Sunny Wan
70770b6aa4 Removed unnecessary code and refactored 2025-03-13 19:42:10 -04:00
Sunny Wan
f9a5109203 Merge branch 'BerriAI:main' into main 2025-03-13 19:37:22 -04:00
Cole McIntosh
501014414b chore(init): update Azure default API version to 2025-02-01-preview 2025-03-12 22:02:48 -06:00
Cole McIntosh
0ea102f9bb chore(init): update Azure default API version to 2024-12-01-preview 2025-03-12 21:33:49 -06:00
Krish Dholakia
2d957a0ed9 Merge branch 'main' into litellm_dev_03_10_2025_p3 2025-03-12 14:56:01 -07:00
Krrish Dholakia
7a8165eaba fix(llm_caching_handler.py): Add event loop to llm client cache info
Fixes https://github.com/BerriAI/litellm/issues/7667
2025-03-12 12:24:24 -07:00
Ishaan Jaff
342741ede1 Merge branch 'main' into litellm_responses_api_support 2025-03-12 12:04:12 -07:00
Ishaan Jaff
368f1de2e1 add OpenAIResponsesAPIConfig 2025-03-11 15:10:34 -07:00
Ishaan Jaff
d6c82327e6 working import litellm.responses 2025-03-11 14:32:32 -07:00
Sunny Wan
1dabc62d7b removed hardcoding and added models to model_prices 2025-03-11 02:05:02 -04:00
Krrish Dholakia
f1cdc26967 feat(endpoints.py): initial set of crud endpoints for reusable credentials on proxy 2025-03-10 17:48:02 -07:00
Krrish Dholakia
4bd4bb16fd feat(proxy_server.py): move credential list to being a top-level param 2025-03-10 17:04:05 -07:00
omrishiv
0674491386
add support for Amazon Nova Canvas model (#7838)
* add initial support for Amazon Nova Canvas model

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* adjust name to AmazonNovaCanvas and map function variables to config

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* tighten model name check

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* fix quality mapping

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add premium quality in config

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* support all Amazon Nova Canvas tasks

* remove unused import

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add tests for image generation tasks and fix payload

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add missing util file

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* update model prices backup file

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* remove image tasks other than text->image

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

---------

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
2025-03-10 08:02:00 -07:00
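From the caller's side, the Nova Canvas support above plugs into litellm's standard image generation entry point. A minimal sketch, assuming the usual `bedrock/` routing; the exact model string is illustrative:

```python
import litellm

# Text -> image via the new Amazon Nova Canvas support (model ID is illustrative)
response = litellm.image_generation(
    model="bedrock/amazon.nova-canvas-v1:0",
    prompt="A watercolor painting of a lighthouse at dusk",
)
# Bedrock image models typically return base64-encoded image data
print(response.data[0].b64_json is not None)
```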
Ishaan Jaff
b02af305de
[Feat] - Display thinking tokens on OpenWebUI (Bedrock, Anthropic, Deepseek) (#9029)
* if merge_reasoning_content_in_choices

* _optional_combine_thinking_block_in_choices

* stash changes

* working merge_reasoning_content_in_choices with bedrock

* fix litellm_params accessor

* fix streaming handler

* merge_reasoning_content_in_choices

* _optional_combine_thinking_block_in_choices

* test_bedrock_stream_thinking_content_openwebui

* merge_reasoning_content_in_choices

* fix for _optional_combine_thinking_block_in_choices

* linting error fix
2025-03-06 18:32:58 -08:00
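Based on the commit messages above, `merge_reasoning_content_in_choices` folds the model's reasoning back into the message content so chat UIs such as OpenWebUI can render it. A hedged sketch; the model ID and `thinking` values are illustrative:

```python
import litellm

response = litellm.completion(
    model="bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0",  # illustrative
    messages=[{"role": "user", "content": "What is 17 * 24? Think it through."}],
    thinking={"type": "enabled", "budget_tokens": 1024},
    merge_reasoning_content_in_choices=True,  # param named in the commits above
)
# With merging on, the reasoning is embedded in the content (e.g. inside <think>
# tags) instead of only living in a separate reasoning_content field.
print(response.choices[0].message.content)
```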
Ishaan Jaff
f47987e673
(Refactor) /v1/messages to follow simpler logic for Anthropic API spec (#9013)
* anthropic_messages_handler v0

* fix /messages

* working messages with router methods

* test_anthropic_messages_handler_litellm_router_non_streaming

* test_anthropic_messages_litellm_router_non_streaming_with_logging

* AnthropicMessagesConfig

* _handle_anthropic_messages_response_logging

* working with /v1/messages endpoint

* working /v1/messages endpoint

* refactor to use router factory function

* use aanthropic_messages

* use BaseConfig for Anthropic /v1/messages

* track api key, team on /v1/messages endpoint

* fix get_logging_payload

* BaseAnthropicMessagesTest

* align test config

* test_anthropic_messages_with_thinking

* test_anthropic_streaming_with_thinking

* fix - display anthropic url for debugging

* test_bad_request_error_handling

* test_anthropic_messages_router_streaming_with_bad_request

* fix ProxyException

* test_bad_request_error_handling_streaming

* use provider_specific_header

* test_anthropic_messages_with_extra_headers

* test_anthropic_messages_to_wildcard_model

* fix gcs pub sub test

* standard_logging_payload

* fix unit testing for anthropic /v1/messages support

* fix pass through anthropic messages api

* delete dead code

* fix anthropic pass through response

* revert change to spend tracking utils

* fix get_litellm_metadata_from_kwargs

* fix spend logs payload json

* proxy_pass_through_endpoint_tests

* TestAnthropicPassthroughBasic

* fix pass through tests

* test_async_vertex_proxy_route_api_key_auth

* _handle_anthropic_messages_response_logging

* vertex_credentials

* test_set_default_vertex_config

* test_anthropic_messages_litellm_router_non_streaming_with_logging

* test_ageneric_api_call_with_fallbacks_basic

* test__aadapter_completion
2025-03-06 00:43:08 -08:00
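Since the endpoint follows the Anthropic Messages API spec, an Anthropic-style request can be pointed at the proxy directly. A sketch over plain HTTP; the proxy URL and virtual key are placeholders:

```python
import httpx

resp = httpx.post(
    "http://localhost:4000/v1/messages",          # proxy base URL is an assumption
    headers={"Authorization": "Bearer sk-1234"},  # LiteLLM virtual key placeholder
    json={
        "model": "claude-3-5-sonnet-20240620",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.json()["content"][0]["text"])  # Anthropic-spec response shape
```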
Sunny Wan
a2fed4059e added Snowflake config to ProviderConfigManager 2025-03-05 20:32:18 -05:00
Sunny Wan
bdd03405fe Removed unnecessary comments 2025-03-03 18:18:24 -05:00
Sunny Wan
fd090c8043 [FEAT] Added snowflake completion provider 2025-03-03 01:20:00 -05:00
Krish Dholakia
a65bfab697
Fix calling claude via invoke route + response_format support for claude on invoke route (#8908)
* fix(anthropic_claude3_transformation.py): fix amazon anthropic claude 3 tool calling transformation on invoke route

move to using anthropic config as base

* fix(utils.py): expose anthropic config via providerconfigmanager

* fix(llm_http_handler.py): support json mode on async completion calls

* fix(invoke_handler/make_call): support json mode for anthropic called via bedrock invoke

* fix(anthropic/): handle 'response_format: {"type": "text"}' + migrate amazon claude 3 invoke config to inherit from anthropic config

Prevents error when passing in 'response_format: {"type": "text"}'

* test: fix test

* fix(utils.py): fix base invoke provider check

* fix(anthropic_claude3_transformation.py): don't pass 'stream' param

* fix: fix linting errors

* fix(converse_transformation.py): handle response_format type=text for converse
2025-02-28 17:56:26 -08:00
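In practice this means JSON mode now works when Claude is reached through Bedrock's invoke API rather than converse. A hedged example; the model ID is illustrative:

```python
import litellm

response = litellm.completion(
    model="bedrock/invoke/anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative
    messages=[{"role": "user", "content": "Return a JSON object with keys 'city' and 'country' for Paris."}],
    response_format={"type": "json_object"},  # json mode, now supported on the invoke route
)
# Per the fix above, response_format={"type": "text"} is also accepted without erroring.
print(response.choices[0].message.content)
```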
Krish Dholakia
9914c166b7
Litellm contributor prs 02 24 2025 (#8775)
* Adding VertexAI Claude 3.7 Sonnet (#8774)

Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>

* build(model_prices_and_context_window.json): add anthropic 3-7 models on vertex ai and bedrock

* Support video_url (#8743)

* Support video_url

Support VLMs that work with video.
Example implementation in vllm: https://github.com/vllm-project/vllm/pull/10020

* llms openai.py: Add ChatCompletionVideoObject

Add data structures to support `video_url` in chat completion

* test test_completion.py: add test for video_url

* Arize Phoenix - ensure correct endpoint/protocol are used; and default to phoenix cloud (#8750)

* minor fixes to default to http and to ensure that the correct endpoint is used

* Update test_arize_phoenix.py

* prioritize http over grpc

---------

Co-authored-by: Emerson Gomes <emerson.gomes@gmail.com>
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
Co-authored-by: Pang Wu <104795337+pang-wu@users.noreply.github.com>
Co-authored-by: Nate Mar <67926244+nate-mar@users.noreply.github.com>
2025-02-24 18:55:48 -08:00
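The `video_url` content part added above mirrors the existing `image_url` shape. A sketch of what a request might look like, assuming a vLLM-served VLM; the model name and URL are illustrative:

```python
import litellm

response = litellm.completion(
    model="hosted_vllm/qwen2-vl-7b-instruct",  # illustrative vLLM deployment
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what happens in this clip."},
            {"type": "video_url", "video_url": {"url": "https://example.com/clip.mp4"}},
        ],
    }],
)
print(response.choices[0].message.content)
```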
Dragos Campean
2905ad98b3
Add anthropic3-7-sonnet (#8766)
2025-02-24 12:59:00 -08:00
Krish Dholakia
09462ba80c
Add cohere v2/rerank support (#8421) (#8605)
* Add cohere v2/rerank support (#8421)

* Support v2 endpoint cohere rerank

* Add tests and docs

* Make v1 default if old params used

* Update docs

* Update docs pt 2

* Update tests

* Add e2e test

* Clean up code

* Use inheritance for new config

* Fix linting issues (#8608)

* Fix cohere v2 failing test + linting (#8672)

* Fix test and unused imports

* Fix tests

* fix: fix linting errors

* test: handle tgai instability

* fix: skip service unavailable err

* test: print logs for unstable test

* test: skip unreliable tests

---------

Co-authored-by: vibhavbhat <vibhavb00@gmail.com>
2025-02-22 22:25:29 -08:00
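The rerank call itself is unchanged; per the PR, v1 stays the default when legacy params are used and the v2 endpoint is used otherwise. A minimal sketch:

```python
import litellm

response = litellm.rerank(
    model="cohere/rerank-english-v3.0",
    query="What is the capital of France?",
    documents=["Paris is the capital of France.", "Berlin is the capital of Germany."],
    top_n=1,
)
print(response.results[0])  # highest-relevance document index + score
```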
Krish Dholakia
21ea52105a
Support arize phoenix on litellm proxy (#7756) (#8715)
* Update opentelemetry.py

wip

* Update test_opentelemetry_unit_tests.py

* fix a few paths and tests

* fix path

* Update litellm_logging.py

* accidentally removed code

* Add type for protocol

* Add and update tests

* minor changes

* update and add additional arize phoenix test

* update existing test

* address feedback

* use standard_logging_object

* address feedback

Co-authored-by: Nate Mar <67926244+nate-mar@users.noreply.github.com>
2025-02-22 20:55:11 -08:00
Krrish Dholakia
c90b9a2a89 test: fix test 2025-02-20 21:29:33 -08:00
Krish Dholakia
b682dc4ec8
Add cost tracking for rerank via bedrock (#8691)
* feat(bedrock/rerank): infer model region if model given as arn

* test: add unit testing to ensure bedrock region name inferred from arn on rerank

* feat(bedrock/rerank/transformation.py): include search units for bedrock rerank result

Resolves https://github.com/BerriAI/litellm/issues/7258#issuecomment-2671557137

* test(test_bedrock_completion.py): add testing for bedrock cohere rerank

* feat(cost_calculator.py): refactor rerank cost tracking to support bedrock cost tracking

* build(model_prices_and_context_window.json): add amazon.rerank model to model cost map

* fix(cost_calculator.py): bedrock/common_utils.py

get base model from model w/ arn -> handles rerank model

* build(model_prices_and_context_window.json): add bedrock cohere rerank pricing

* feat(bedrock/rerank): migrate bedrock config to basererank config

* Revert "feat(bedrock/rerank): migrate bedrock config to basererank config"

This reverts commit 84fae1f167.

* test: add testing to ensure large doc / queries are correctly counted

* Revert "test: add testing to ensure large doc / queries are correctly counted"

This reverts commit 4337f1657e.

* fix(migrate-jina-ai-to-rerank-config): enables cost tracking

* refactor(jina_ai/): finish migrating jina ai to base rerank config

enables cost tracking

* fix(jina_ai/rerank): e2e jina ai rerank cost tracking

* fix: cleanup dead code

* fix: fix python3.8 compatibility error

* test: fix test

* test: add e2e testing for azure ai rerank

* fix: fix linting error

* test: mark cohere as flaky
2025-02-20 21:00:18 -08:00
Krrish Dholakia
9470f57e86 build: extract <think>..</think> block for amazon deepseek r1 and put in reasoning_content 2025-02-19 21:10:38 -08:00
Ishaan Jaff
e8f387200a ci/cd run again 2025-02-17 21:37:52 -08:00
Krish Dholakia
58141df65d
Litellm dev 02 13 2025 p2 (#8525)
* fix(azure/chat/gpt_transformation.py): add 'prediction' as a supported azure param

Closes https://github.com/BerriAI/litellm/issues/8500

* build(model_prices_and_context_window.json): add new 'gemini-2.0-pro-exp-02-05' model

* style: cleanup invalid json trailing comma

* feat(utils.py): support passing 'tokenizer_config' to register_prompt_template

enables passing complete tokenizer config of model to litellm

Allows calling deepseek on bedrock with the correct prompt template

* fix(utils.py): fix register_prompt_template for custom model names

* test(test_prompt_factory.py): fix test

* test(test_completion.py): add e2e test for bedrock invoke deepseek ft model

* feat(base_invoke_transformation.py): support hf_model_name param for bedrock invoke calls

enables proxy admin to set base model for ft bedrock deepseek model

* feat(bedrock/invoke): support deepseek_r1 route for bedrock

makes it easy to apply the right chat template to that call

* feat(constants.py): store deepseek r1 chat template - allow user to get correct response from deepseek r1 without extra work

* test(test_completion.py): add e2e mock test for bedrock deepseek

* docs(bedrock.md): document new deepseek_r1 route for bedrock

allows us to use the right config

* fix(exception_mapping_utils.py): catch read operation timeout
2025-02-13 20:28:42 -08:00
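The new `deepseek_r1` invoke route exists so the stored DeepSeek-R1 chat template is applied automatically to a Bedrock custom-import model. A hedged sketch; the imported-model ARN is a placeholder:

```python
import litellm

response = litellm.completion(
    # deepseek_r1 route from the commits above; the ARN is a placeholder
    model="bedrock/deepseek_r1/arn:aws:bedrock:us-west-2:123456789012:imported-model/abcd1234",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response.choices[0].message.content)
```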
Ishaan Jaff
12ac414839
(Feat) - Allow calling Nova models on /bedrock/invoke/ (#8397)
* add nova to BEDROCK_INVOKE_PROVIDERS_LITERAL

* BedrockInvokeNovaRequest

* nova + invoke config

* add AmazonInvokeNovaConfig

* AmazonInvokeNovaConfig

* run transform_request for invoke/nova models

* AmazonInvokeNovaConfig

* rename invoke tests

* fix linting error

* TestBedrockInvokeNovaJson

* TestBedrockInvokeNovaJson

* add converse_chunk_parser

* test_nova_invoke_remove_empty_system_messages

* test_nova_invoke_streaming_chunk_parsing
2025-02-08 13:03:05 -08:00
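With `AmazonInvokeNovaConfig` in place, Nova models can be called through the invoke route the same way as other invoke providers. A short sketch; the model ID is illustrative:

```python
import litellm

response = litellm.completion(
    model="bedrock/invoke/amazon.nova-micro-v1:0",  # illustrative Nova model ID
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```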
Ishaan Jaff
03f738eff6 fix test_models_by_provider 2025-02-05 19:01:00 -08:00
Ishaan Jaff
818792228c
(Refactor) - migrate bedrock invoke to BaseLLMHTTPHandler class (#8290)
* initial transform for invoke

* invoke transform_response

* working - able to make request

* working get_complete_url

* working - invoke now runs on llm_http_handler

* fix unused imports

* track litellm overhead ms

* working stream request

* sign_request transform

* sign_request update

* use has_async_custom_stream_wrapper property

* use get_async_custom_stream_wrapper in base llm http handler

* fix make_call in invoke handler

* fix invoke with streaming get_async_custom_stream_wrapper

* working bedrock async streaming with invoke

* fix make call handler for bedrock

* test_all_model_configs

* fix test_bedrock_custom_prompt_template

* sync streaming for bedrock invoke

* fix _add_stream_param_to_request_body

* test_async_text_completion_bedrock

* fix transform_request

* fix get_supported_openai_params

* fix test supports tool choice

* fix test_supports_tool_choice

* add unit test coverage for bedrock invoke transform

* fix location of transformation files

* update import loc

* fix bedrock invoke unit tests

* fix import for max completion tokens
2025-02-05 18:58:55 -08:00
Ishaan Jaff
51b9a02615 run ci/cd again 2025-02-04 22:19:57 -08:00
Ishaan Jaff
ab134b8871 ci/cd run again 2025-02-04 21:28:13 -08:00
Krish Dholakia
1105e35538
Complete o3 model support (#8183)
* fix(o_series_transformation.py): add 'reasoning_effort' as o series model param

Closes https://github.com/BerriAI/litellm/issues/8182

* fix(main.py): ensure `reasoning_effort` is a mapped openai param

* refactor(azure/): rename o1_[x] files to o_series_[x]

* refactor(base_llm_unit_tests.py): refactor testing for o series reasoning effort

* test(test_azure_o_series.py): have azure o series tests correctly inherit from base o series model tests

* feat(base_utils.py): support translating 'developer' role to 'system' role for non-openai providers

Makes it easy to switch from openai to anthropic

* fix: fix linting errors

* fix(base_llm_unit_tests.py): fix test

* fix(main.py): add missing param
2025-02-02 22:36:37 -08:00
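Taken together, the two PRs above make `reasoning_effort` a mapped OpenAI param for o-series models and translate the `developer` role to `system` for non-OpenAI providers. A brief sketch:

```python
import litellm

response = litellm.completion(
    model="o3-mini",
    messages=[
        {"role": "developer", "content": "Answer tersely."},  # mapped to 'system' on non-OpenAI providers
        {"role": "user", "content": "Prove that sqrt(2) is irrational."},
    ],
    reasoning_effort="high",  # o-series param added in the PR above
)
print(response.choices[0].message.content)
```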
Krish Dholakia
23f458d2da
Improved O3 + Azure O3 support (#8181)
* fix: support azure o3 model family for fake streaming workaround (#8162)

* fix: support azure o3 model family for fake streaming workaround

* refactor: rename helper to is_o_series_model for clarity

* update function calling parameters for o3 models (#8178)

* refactor(o1_transformation.py): refactor o1 config to be o series config, expand o series model check to o3

ensures max_tokens is correctly translated for o3

* feat(openai/): refactor o1 files to be 'o_series' files

expands naming to cover o3

* fix(azure/chat/o1_handler.py): azure openai is an instance of openai - was causing resets

* test(test_azure_o_series.py): assert stream faked for azure o3 mini

Resolves https://github.com/BerriAI/litellm/pull/8162

* fix(o1_transformation.py): fix o1 transformation logic to handle explicit o1_series routing

* docs(azure.md): update doc with `o_series/` model name

---------

Co-authored-by: byrongrogan <47910641+byrongrogan@users.noreply.github.com>
Co-authored-by: Low Jian Sheng <15527690+lowjiansheng@users.noreply.github.com>
2025-02-01 09:52:28 -08:00
Ishaan Jaff
a74ecf5dbc new release 2025-01-31 21:07:42 -08:00
Ishaan Jaff
9ff27809b2
(Feat) add bedrock/deepseek custom import models (#8132)
* add support for using llama spec with bedrock

* fix get_bedrock_invoke_provider

* add support for using bedrock provider in mappings

* working request

* test_bedrock_custom_deepseek

* test_bedrock_custom_deepseek

* fix _get_model_id_for_llama_like_model

* test_bedrock_custom_deepseek

* doc DeepSeek-R1-Distill-Llama-70B

* test_bedrock_custom_deepseek
2025-01-31 18:40:44 -08:00
Krish Dholakia
de261e2120
Doc updates + management endpoint fixes (#8138)
* Litellm dev 01 29 2025 p4 (#8107)

* fix(key_management_endpoints.py): always get db team

Fixes https://github.com/BerriAI/litellm/issues/7983

* test(test_key_management.py): add unit test enforcing check_db_only is always true on key generate checks

* test: fix test

* test: skip gemini thinking

* Litellm dev 01 29 2025 p3 (#8106)

* fix(__init__.py): reduces size of __init__.py and reduces scope for errors by using correct param

* refactor(__init__.py): refactor init by cleaning up redundant params

* refactor(__init__.py): move more constants into constants.py

cleanup root

* refactor(__init__.py): more cleanup

* feat(__init__.py): expose new 'disable_hf_tokenizer_download' param

enables hf model usage in offline env

* docs(config_settings.md): document new disable_hf_tokenizer_download param

* fix: fix linting error

* fix: fix unsafe comparison

* test: fix test

* docs(public_teams.md): add doc showing how to expose public teams for users to join

* docs: add beta disclaimer on public teams

* test: update tests
2025-01-30 22:56:41 -08:00
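Of the params above, `disable_hf_tokenizer_download` is the user-facing one: it keeps token counting fully offline. A one-line sketch of the setting named in the commits:

```python
import litellm

# Never fetch tokenizers from the Hugging Face hub (e.g. in air-gapped
# deployments); litellm falls back to its default tokenizer for token counting.
litellm.disable_hf_tokenizer_download = True
```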
Krish Dholakia
69a6da4727
Litellm dev 01 30 2025 p2 (#8134)
* feat(lowest_tpm_rpm_v2.py): fix redis cache check to use >= instead of >

makes it consistent

* test(test_custom_guardrails.py): add more unit testing on default on guardrails

ensure it runs if user sent guardrail list is empty

* docs(quick_start.md): clarify default on guardrails run even if user guardrails list contains other guardrails

* refactor(litellm_logging.py): refactor no-log to helper util

allows for more consistent behavior

* feat(litellm_logging.py): add event hook to verbose logs

* fix(litellm_logging.py): add unit testing to ensure `litellm.disable_no_log_param` is respected

* docs(logging.md): document how to disable 'no-log' param

* test: fix test to handle feb

* test: cleanup old bedrock model

* fix: fix router check
2025-01-30 22:18:53 -08:00
Ishaan Jaff
8a235e7d38
(Refactor / QA) - Use LoggingCallbackManager to append callbacks and ensure no duplicate callbacks are added (#8112)
* LoggingCallbackManager

* add logging_callback_manager

* use logging_callback_manager

* add add_litellm_failure_callback

* use add_litellm_callback

* use add_litellm_async_success_callback

* add_litellm_async_failure_callback

* linting fix

* fix logging callback manager

* test_duplicate_multiple_loggers_test

* use _reset_all_callbacks

* fix testing with dup callbacks

* test_basic_image_generation

* reset callbacks for tests

* fix check for _add_custom_logger_to_list

* fix test_amazing_sync_embedding

* fix _get_custom_logger_key

* fix batches testing

* fix _reset_all_callbacks

* fix _check_callback_list_size

* add callback_manager_test

* fix test gemini-2.0-flash-thinking-exp-01-21
2025-01-30 19:35:50 -08:00
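The manager's job is idempotent callback registration: adding the same logger twice should not double-log. A hedged sketch of the API named in the commits, assuming the manager is exposed as `litellm.logging_callback_manager`:

```python
import litellm
from litellm.integrations.custom_logger import CustomLogger

class MyLogger(CustomLogger):
    def log_success_event(self, kwargs, response_obj, start_time, end_time):
        print("success:", kwargs.get("model"))

logger = MyLogger()
# Registering through the manager de-duplicates; calling this twice
# should still leave a single registered callback.
litellm.logging_callback_manager.add_litellm_callback(logger)
litellm.logging_callback_manager.add_litellm_callback(logger)
```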
Krish Dholakia
9c20c69915
Fix bedrock model pricing + add unit test using bedrock pricing api (#7978)
* test(test_completion_cost.py): add unit testing to ensure all bedrock models with region name have cost tracked

* feat: initial script to get bedrock pricing from amazon api

ensures bedrock pricing is accurate

* build(model_prices_and_context_window.json): correct bedrock model prices based on api check

ensures accurate bedrock pricing

* ci(config.yml): add bedrock pricing check to ci/cd

ensures litellm always maintains up-to-date pricing for bedrock models

* ci(config.yml): add beautiful soup to ci/cd

* test: bump groq model

* test: fix test
2025-01-28 17:57:49 -08:00
Ishaan Jaff
669b4fc955
(Prometheus) - emit key budget metrics on startup (#8002)
* add UI_SESSION_TOKEN_TEAM_ID

* add type KeyListResponseObject

* add _list_key_helper

* _initialize_api_key_budget_metrics

* key / budget metrics

* init key budget metrics on startup

* test_initialize_api_key_budget_metrics

* fix linting

* test_list_key_helper

* test_initialize_remaining_budget_metrics_exception_handling
2025-01-25 10:37:52 -08:00
Krish Dholakia
c2fa213ae2
add type annotation for litellm.api_base (#7980) (#7994)
Co-authored-by: Frederick Robinson <frederick.robinson@frrad.com>
2025-01-25 07:31:19 -08:00
Ishaan Jaff
74caef0843
(Feat) - Add GCS Pub/Sub Logging integration for sending DB SpendLogs to BigQuery (#7976)
* add pub_sub

* fix custom batch logger for GCS PUB/SUB

* GCS_PUBSUB_PROJECT_ID

* e2e gcs pub sub

* add gcs pub sub

* fix logging

* add GcsPubSubLogger

* fix pub sub

* add pub sub

* docs gcs pub / sub

* docs on pub sub controls

* test_gcs_pub_sub

* fix publish_message

* test_async_gcs_pub_sub

* test_async_gcs_pub_sub
2025-01-24 20:57:20 -08:00
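Wiring the integration up looks like any other litellm callback; based on the commit messages, configuration is env-var driven. The topic variable name and callback alias here are assumptions:

```python
import os
import litellm

os.environ["GCS_PUBSUB_PROJECT_ID"] = "my-gcp-project"    # named in the commits above
os.environ["GCS_PUBSUB_TOPIC_ID"] = "litellm-spend-logs"  # assumed variable name
litellm.callbacks = ["gcs_pubsub"]                        # assumed callback alias

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "hi"}],
)
```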
Krish Dholakia
4911cd80a1
fix(utils.py): move adding custom logger callback to success event in… (#7905)
* fix(utils.py): move adding custom logger callback to success event into separate function + don't add success callback to failure event

if user is explicitly choosing 'success' callback, don't log failure as well

* test(test_utils.py): add unit test to ensure custom logger callback only adds callback to specific event

* fix(utils.py): remove string from list of callbacks once corresponding callback class is added

prevents floating values - simplifies testing

* fix(utils.py): fix linting error

* test: cleanup args before test

* test: fix test

* test: update test

* test: fix test
2025-01-22 21:49:09 -08:00
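The practical effect of the fix above: a logger registered only for success no longer fires on failures. A brief sketch:

```python
import litellm
from litellm.integrations.custom_logger import CustomLogger

class SuccessOnlyLogger(CustomLogger):
    def log_success_event(self, kwargs, response_obj, start_time, end_time):
        print("logged success for:", kwargs.get("model"))

# Explicitly choosing 'success' no longer implicitly registers the
# logger for failure events as well.
litellm.success_callback = [SuccessOnlyLogger()]
```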