Commit graph

578 commits

Ishaan Jaff
a48b3afef8 Merge pull request #9719 from BerriAI/litellm_metrics_pod_lock_manager
[Reliability] Emit operational metrics for new DB Transaction architecture
2025-04-04 21:12:06 -07:00
Krish Dholakia
b3dac9c145 Gemini image generation output support (#9646)
* fix(gemini/transformation.py): make GET request to get uri details if they cannot be inferred

* fix: fix linting errors

* Revert "fix: fix linting errors"

This reverts commit 926a5a527f.

* fix(gemini/transformation.py): modalities param support

Partially resolves https://github.com/BerriAI/litellm/issues/9237

* feat(google_ai_studio/): add image generation support

Closes https://github.com/BerriAI/litellm/issues/9237

* fix: fix types

* fix: fix ruff check
2025-04-04 20:37:48 -07:00
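The commit above wires Gemini image output into the standard chat path. A minimal sketch of how that might be called, assuming the OpenAI-style `modalities` parameter named in the commit and an illustrative Google AI Studio model id:

```python
# Hedged sketch: ask Gemini for an image in the chat response via the
# `modalities` param added above. Model id and response shape are assumptions.
import litellm

response = litellm.completion(
    model="gemini/gemini-2.0-flash-exp-image-generation",  # assumed model id
    messages=[{"role": "user", "content": "Generate an image of a lighthouse at dusk"}],
    modalities=["image", "text"],
)

# The image is expected to come back alongside any text content; inspect the
# message object to see where the provider placed it.
print(response.choices[0].message)
```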
Krish Dholakia
01633ac884 fix(router.py): support reusable credentials via passthrough router (#9758)
* fix(router.py): support reusable credentials via passthrough router

enables reusable vertex credentials to be used in passthrough

* test: fix test

* test(test_router_adding_deployments.py): add unit testing
2025-04-04 18:40:14 -07:00
Ishaan Jaff
ab6c9e0313 add operational metrics for pod lock manager v2 arch 2025-04-04 16:41:07 -07:00
Ishaan Jaff
98bc54b428 Merge branch 'main' into litellm_metrics_pod_lock_manager 2025-04-04 16:33:16 -07:00
Krrish Dholakia
8a23dbd2fd fix(factory.py): don't pass cache control if not set
bedrock invoke does not support this
2025-04-04 12:37:34 -07:00
Albert Örwall
fc07d324c7 Fix prompt caching for Anthropic tool calls (#9706)
* Add prompt cache support to Anthropic tool calls

* Fix linting issue and add test
2025-04-03 20:19:21 -07:00
Krish Dholakia
0ce878e804 LiteLLM Minor Fixes & Improvements (04/02/2025) (#9725)
* Add date picker to usage tab + Add reasoning_content token tracking across all providers on streaming (#9722)

* feat(new_usage.tsx): add date picker for new usage tab

allow user to look back on their usage data

* feat(anthropic/chat/transformation.py): report reasoning tokens in completion token details

allows usage tracking on how many reasoning tokens are actually being used

* feat(streaming_chunk_builder.py): return reasoning_tokens in anthropic/openai streaming response

allows tracking reasoning_token usage across providers

* Fix update team metadata + fix bulk adding models on Ui  (#9721)

* fix(handle_add_model_submit.tsx): fix bulk adding models

* fix(team_info.tsx): fix team metadata update

Fixes https://github.com/BerriAI/litellm/issues/9689

* (v0) Unified file id - allow calling multiple providers with same file id (#9718)

* feat(files_endpoints.py): initial commit adding 'target_model_names' support

allow developer to specify all the models they want to call with the file

* feat(files_endpoints.py): return unified files endpoint

* test(test_files_endpoints.py): add validation test - if invalid purpose submitted

* feat: more updates

* feat: initial working commit of unified file id translation

* fix: additional fixes

* fix(router.py): remove model replace logic in jsonl on acreate_file

enables file upload to work for chat completion requests as well

* fix(files_endpoints.py): remove whitespace around model name

* fix(azure/handler.py): return acreate_file with correct response type

* fix: fix linting errors

* test: fix mock test to run on github actions

* fix: fix ruff errors

* fix: fix file too large error

* fix(utils.py): remove redundant var

* test: modify test to work on github actions

* test: update tests

* test: more debug logs to understand ci/cd issue

* test: fix test for respx

* test: skip mock respx test

fails on ci/cd - not clear why

* fix: fix ruff check

* fix: fix test

* fix(model_connection_test.tsx): fix linting error

* test: update unit tests
2025-04-03 11:48:52 -07:00
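The "(v0) Unified file id" change in the squashed PR above is a proxy-side feature: one uploaded file can be targeted at several deployments. A hedged sketch of how a client might exercise it, assuming the proxy accepts `target_model_names` through `extra_body` and that a comma-separated string is acceptable (only the field name comes from the commit; the rest is illustrative):

```python
# Hedged sketch against a LiteLLM proxy assumed to run at http://localhost:4000.
# `target_model_names` is the field named in the commit; the extra_body route,
# value format, and purpose are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")  # placeholder key

with open("batch_input.jsonl", "rb") as f:
    unified_file = client.files.create(
        file=f,
        purpose="user_data",  # assumed purpose value
        extra_body={"target_model_names": "gpt-4o-mini, gemini-2.0-flash"},
    )

# The returned id should then be reusable in requests routed to any of the
# listed deployments.
print(unified_file.id)
```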
Ishaan Jaff
49f2cee5b6 Merge branch 'main' into litellm_metrics_pod_lock_manager 2025-04-02 21:35:55 -07:00
Krish Dholakia
354a75fb59 Squashed commit of the following: (#9709)
commit b12a9892b7
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Wed Apr 2 08:09:56 2025 -0700

    fix(utils.py): don't modify openai_token_counter

commit 294de31803
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Mar 24 21:22:40 2025 -0700

    fix: fix linting error

commit cb6e9fbe40
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Mar 24 19:52:45 2025 -0700

    refactor: complete migration

commit bfc159172d
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Mar 24 19:09:59 2025 -0700

    refactor: refactor more constants

commit 43ffb6a558
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Mar 24 18:45:24 2025 -0700

    fix: test

commit 04dbe4310c
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Mar 24 18:28:58 2025 -0700

    refactor: move more constants into constants.py

commit 3c26284aff
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Mar 24 18:14:46 2025 -0700

    refactor: migrate hardcoded constants out of __init__.py

commit c11e0de69d
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Mar 24 18:11:21 2025 -0700

    build: migrate all constants into constants.py

commit 7882bdc787
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Mar 24 18:07:37 2025 -0700

    build: initial test banning hardcoded numbers in repo
2025-04-02 21:24:54 -07:00
Ishaan Jaff
db890abe18 Merge branch 'main' into litellm_metrics_pod_lock_manager 2025-04-02 21:04:44 -07:00
Ishaan Jaff
6cdc547728 Merge branch 'main' into litellm_fix_azure_o_series 2025-04-02 20:58:52 -07:00
Ishaan Jaff
dcc91315ba track service types on prom services 2025-04-02 18:03:09 -07:00
Ishaan Jaff
a963626ec5 use service logger for tracking pod lock status 2025-04-02 17:39:48 -07:00
Krish Dholakia
0519c0c507 Add Google AI Studio /v1/files upload API support (#9645)
* test: fix import for test

* fix: fix bad error string

* docs: cleanup files docs

* fix(files/main.py): cleanup error string

* style: initial commit with a provider/config pattern for files api

google ai studio files api onboarding

* fix: test

* feat(gemini/files/transformation.py): support gemini files api response transformation

* fix(gemini/files/transformation.py): return file id as gemini uri

allows id to be passed in to chat completion request, just like openai

* feat(llm_http_handler.py): support async route for files api on llm_http_handler

* fix: fix linting errors

* fix: fix model info check

* fix: fix ruff errors

* fix: fix linting errors

* Revert "fix: fix linting errors"

This reverts commit 926a5a527f.

* fix: fix linting errors

* test: fix test

* test: fix tests
2025-04-02 08:56:58 -07:00
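A rough sketch of the new Google AI Studio files route from the SDK side, assuming `litellm.create_file` takes a `custom_llm_provider` argument and that the returned id is the Gemini URI the commit describes; argument names, the purpose value, and the follow-up content shape are assumptions:

```python
# Hedged sketch: upload a file to Google AI Studio, then reference it in a chat
# request the way the commit describes (file id returned as a Gemini URI).
import litellm

file_obj = litellm.create_file(
    file=open("report.pdf", "rb"),
    purpose="user_data",            # assumed purpose value
    custom_llm_provider="gemini",   # route to the Google AI Studio files API
)

response = litellm.completion(
    model="gemini/gemini-2.0-flash",  # assumed model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize this document."},
            {"type": "file", "file": {"file_id": file_obj.id}},  # assumed content shape
        ],
    }],
)
print(response.choices[0].message.content)
```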
Krish Dholakia
4f9805c9aa fix(streaming_handler.py): fix completion start time tracking (#9688)
* fix(streaming_handler.py): fix completion start time tracking

Fixes https://github.com/BerriAI/litellm/issues/9210

* feat(anthropic/chat/transformation.py): map openai 'reasoning_effort' to anthropic 'thinking' param

Fixes https://github.com/BerriAI/litellm/issues/9022

* feat: map 'reasoning_effort' to 'thinking' param across bedrock + vertex

Closes https://github.com/BerriAI/litellm/issues/9022#issuecomment-2705260808
2025-04-01 22:00:56 -07:00
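The commit above maps the OpenAI `reasoning_effort` parameter onto Anthropic's `thinking` configuration (and the Bedrock/Vertex equivalents). A minimal sketch, with the model id as an assumption:

```python
# Hedged sketch: one OpenAI-style knob across providers; per the commit, litellm
# translates `reasoning_effort` into an Anthropic `thinking` budget internally.
import litellm

response = litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",  # assumed model id
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    reasoning_effort="low",
)
print(response.choices[0].message.content)
```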
Ishaan Jaff
1b40f3d1db allowed_openai_params as a litellm param 2025-04-01 19:50:31 -07:00
Krish Dholakia
b01de8030b Openrouter streaming fixes + Anthropic 'file' message support (#9667)
* fix(openrouter/transformation.py): Handle error in openrouter stream

Fixes https://github.com/Aider-AI/aider/issues/3550

* test(test_openrouter_chat_transformation.py): add unit tests

* feat(anthropic/chat/transformation.py): add openai 'file' message content type support

Closes https://github.com/BerriAI/litellm/issues/9463

* fix(factory.py): add bedrock converse support for openai 'file' message content type

Closes https://github.com/BerriAI/litellm/issues/9463
2025-03-31 21:22:59 -07:00
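The Anthropic 'file' message support above follows the OpenAI content-part format. A hedged sketch sending a base64 PDF to Claude; the field names follow the OpenAI spec and the model id is an assumption:

```python
# Hedged sketch: send a PDF as an OpenAI-style "file" content part to an
# Anthropic model (the commit adds the same handling for Bedrock Converse).
import base64

import litellm

with open("contract.pdf", "rb") as f:
    pdf_b64 = base64.b64encode(f.read()).decode()

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",  # assumed model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What are the key obligations in this contract?"},
            {
                "type": "file",
                "file": {
                    "filename": "contract.pdf",
                    "file_data": f"data:application/pdf;base64,{pdf_b64}",
                },
            },
        ],
    }],
)
print(response.choices[0].message.content)
```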
Ishaan Jaff
01b16c36a0 Merge branch 'main' into litellm_anthropic_messages_improvements 2025-03-31 14:22:56 -07:00
Ishaan Jaff
f5c0afcf96 Merge pull request #9642 from BerriAI/litellm_mcp_improvements_expose_sse_urls
[Feat] - MCP improvements, add support for using SSE MCP servers
2025-03-29 19:37:57 -07:00
Krish Dholakia
70f993d3d7 Add gemini audio input support + handle special tokens in sagemaker response (#9640)
* fix(internal_user_endpoints.py): cleanup unused variables on beta endpoint

no team/org split on daily user endpoint

* build(model_prices_and_context_window.json): gemini-2.0-flash supports audio input

* feat(gemini/transformation.py): support passing audio input to gemini

* test: fix test

* fix(gemini/transformation.py): support audio input as a url

enables passing google cloud bucket urls

* fix(gemini/transformation.py): support explicitly passing format of file

* fix(gemini/transformation.py): expand support for inferred file types from url

* fix(sagemaker/completion/transformation.py): fix special token error when counting sagemaker tokens

* test: fix import
2025-03-29 19:23:09 -07:00
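A short sketch of the audio-input path added above, using the OpenAI `input_audio` content part; the commit also covers URL input (e.g. Google Cloud Storage links) with an explicit format hint, which is not shown here. Model id and field shapes are assumptions:

```python
# Hedged sketch: pass base64 audio to Gemini through the OpenAI `input_audio`
# content part. Only the inline form is shown; the commit also allows URLs.
import base64

import litellm

with open("meeting.wav", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode()

response = litellm.completion(
    model="gemini/gemini-2.0-flash",  # flagged as audio-capable in the commit
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Transcribe and summarize this recording."},
            {"type": "input_audio", "input_audio": {"data": audio_b64, "format": "wav"}},
        ],
    }],
)
print(response.choices[0].message.content)
```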
Ishaan Jaff
198fb23d0c fix listing mcp tools 2025-03-29 18:40:58 -07:00
Ishaan Jaff
55ad7cbda0 fix import errors without mcp 2025-03-29 17:44:32 -07:00
Ishaan Jaff
2dfd302a82 log MCP tool call metadata in SLP 2025-03-29 15:50:13 -07:00
Ishaan Jaff
7d00abc37f endpoints to list and call tools 2025-03-29 14:31:35 -07:00
Ishaan Jaff
6c7b42f575 REST API endpoint for MCP 2025-03-29 13:35:46 -07:00
Ishaan Jaff
061f98c570 mcp server manager 2025-03-29 12:51:16 -07:00
Krish Dholakia
d7b294dd0a build(pyproject.toml): add new dev dependencies - for type checking (#9631)
* build(pyproject.toml): add new dev dependencies - for type checking

* build: reformat files to fit black

* ci: reformat to fit black

* ci(test-litellm.yml): make test runs clearer

* build(pyproject.toml): add ruff

* fix: fix ruff checks

* build(mypy/): fix mypy linting errors

* fix(hashicorp_secret_manager.py): fix passing cert for tls auth

* build(mypy/): resolve all mypy errors

* test: update test

* fix: fix black formatting

* build(pre-commit-config.yaml): use poetry run black

* fix(proxy_server.py): fix linting error

* fix: fix ruff safe representation error
2025-03-29 11:02:13 -07:00
Krish Dholakia
308a2fb195 Add bedrock latency optimized inference support (#9623)
* fix(converse_transformation.py): add performanceConfig param support on bedrock

Closes https://github.com/BerriAI/litellm/issues/7606

* fix(converse_transformation.py): refactor to use more flexible single getter for params which are separate config blocks

* test(test_main.py): add e2e mock test for bedrock performance config

* build(model_prices_and_context_window.json): add versioned multimodal embedding

* refactor(multimodal_embeddings/): migrate to config pattern

* feat(vertex_ai/multimodalembeddings): calculate usage for multimodal embedding calls

Enables cost calculation for multimodal embeddings

* feat(vertex_ai/multimodalembeddings): get usage object for embedding calls

ensures accurate cost tracking for vertexai multimodal embedding calls

* fix(embedding_handler.py): remove unused imports

* fix: fix linting errors

* fix: handle response api usage calculation

* test(test_vertex_ai_multimodal_embedding_transformation.py): update tests

* test: mark flaky test

* feat(vertex_ai/multimodal_embeddings/transformation.py): support text+image+video input

* docs(vertex.md): document sending text + image to vertex multimodal embeddings

* test: remove incorrect file

* fix(multimodal_embeddings/transformation.py): fix linting error

* style: remove unused import
2025-03-29 00:23:09 -07:00
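A hedged sketch of the `performanceConfig` passthrough added in the commit above; the parameter shape mirrors Bedrock's Converse API and the model id is an assumption:

```python
# Hedged sketch: request Bedrock's latency-optimized inference tier via the
# provider-specific performanceConfig param the commit adds support for.
import litellm

response = litellm.completion(
    model="bedrock/us.anthropic.claude-3-5-haiku-20241022-v1:0",  # assumed model id
    messages=[{"role": "user", "content": "Give me a one-line status summary."}],
    performanceConfig={"latency": "optimized"},
)
print(response.choices[0].message.content)
```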
Krish Dholakia
d58fe5a9f9 Add OpenAI gpt-4o-transcribe support (#9517)
* refactor: introduce new transformation config for gpt-4o-transcribe models

* refactor: expose new transformation configs for audio transcription

* ci: fix config yml

* feat(openai/transcriptions): support provider config transformation on openai audio transcriptions

allows gpt-4o and whisper audio transformation to work as expected

* refactor: migrate fireworks ai + deepgram to new transform request pattern

* feat(openai/): working support for gpt-4o-audio-transcribe

* build(model_prices_and_context_window.json): add gpt-4o-transcribe to model cost map

* build(model_prices_and_context_window.json): specify what endpoints are supported for `/audio/transcriptions`

* fix(get_supported_openai_params.py): fix return

* refactor(deepgram/): migrate unit test to deepgram handler

* refactor: cleanup unused imports

* fix(get_supported_openai_params.py): fix linting error

* test: update test
2025-03-26 23:10:25 -07:00
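A minimal sketch of the new transcription route, assuming the model string can be passed directly to `litellm.transcription`:

```python
# Hedged sketch: send an audio file to gpt-4o-transcribe through the unified
# transcription API. The bare model string (no provider prefix) is an assumption.
import litellm

with open("call.mp3", "rb") as audio_file:
    transcript = litellm.transcription(
        model="gpt-4o-transcribe",
        file=audio_file,
    )

print(transcript.text)
```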
Krish Dholakia
9c083e7d2c Support Gemini audio token cost tracking + fix openai audio input token cost tracking (#9535)
* fix(vertex_and_google_ai_studio_gemini.py): log gemini audio tokens in usage object

enables accurate cost tracking

* refactor(vertex_ai/cost_calculator.py): refactor 128k+ token cost calculation to only run if model info has it

Google has moved away from this for gemini-2.0 models

* refactor(vertex_ai/cost_calculator.py): migrate to usage object for more flexible data passthrough

* fix(llm_cost_calc/utils.py): support audio token cost tracking in generic cost per token

enables vertex ai cost tracking to work with audio tokens

* fix(llm_cost_calc/utils.py): default to total prompt tokens if text tokens field not set

* refactor(llm_cost_calc/utils.py): move openai cost tracking to generic cost per token

more consistent behaviour across providers

* test: add unit test for gemini audio token cost calculation

* ci: bump ci config

* test: fix test
2025-03-26 17:26:25 -07:00
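A sketch of how the audio-token accounting above could be inspected after a call; `prompt_tokens_details.audio_tokens` follows the OpenAI usage schema and only populates when audio was actually sent, so the field access is guarded and should be treated as an assumption:

```python
# Hedged sketch: read audio token counts off the usage object and let
# completion_cost price them where the model map defines an audio rate.
import litellm

response = litellm.completion(
    model="gemini/gemini-2.0-flash",  # assumed model id; send audio content to see audio_tokens
    messages=[{"role": "user", "content": "Say hello."}],
)

usage = response.usage
details = getattr(usage, "prompt_tokens_details", None)
audio_tokens = getattr(details, "audio_tokens", None) if details else None
print("prompt tokens:", usage.prompt_tokens, "audio tokens:", audio_tokens)
print("estimated cost ($):", litellm.completion_cost(completion_response=response))
```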
Ishaan Jaff
f450fbdfa0 fix response typing 2025-03-26 16:56:56 -07:00
Ishaan Jaff
620d49e1dc add type hints for AnthropicMessagesResponse 2025-03-26 16:53:33 -07:00
Krish Dholakia
7873080223 Nova Canvas complete image generation tasks (#9177) (#9525)
* Nova Canvas complete image generation tasks (#9177)

* add initial support for Amazon Nova Canvas model

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* adjust name to AmazonNovaCanvas and map function variables to config

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* tighten model name check

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* fix quality mapping

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add premium quality in config

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* support all Amazon Nova Canvas tasks

* remove unused import

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add tests for image generation tasks and fix payload

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add missing util file

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* update model prices backup file

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* remove image tasks other than text->image

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add color guided generation task for Nova Canvas

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* fix merge

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add nova canvas image generation documentation

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add nova canvas unit tests

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

---------

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* ci(config.yml): bump ci config

* test: fix test

---------

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>
Co-authored-by: omrishiv <327609+omrishiv@users.noreply.github.com>
2025-03-26 11:28:20 -07:00
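A hedged sketch of the text-to-image task kept in the merged PR above (the other Nova Canvas tasks were split out); the Bedrock model id is an assumption, as is the base64 output field:

```python
# Hedged sketch: text-to-image with Amazon Nova Canvas through the unified
# image generation API. Model id is an assumption; output is assumed base64.
import base64

import litellm

image = litellm.image_generation(
    model="bedrock/amazon.nova-canvas-v1:0",
    prompt="A watercolor painting of a red fox in a snowy forest",
)

with open("fox.png", "wb") as out:
    out.write(base64.b64decode(image.data[0].b64_json))
```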
Ishaan Jaff
c6424d6246 Merge branch 'main' into litellm_exp_mcp_server 2025-03-24 19:03:56 -07:00
Ishaan Jaff
f133bb07d1 fix pydantic import error 2025-03-24 07:11:48 -07:00
Ishaan Jaff
d932206bfb Merge branch 'main' into litellm_exp_mcp_server 2025-03-22 18:51:25 -07:00
Ishaan Jaff
f03f3c3d9a test_langfuse_logging_completion 2025-03-22 18:09:04 -07:00
Ishaan Jaff
4fb3fb3dff FileSearchTool 2025-03-22 17:56:14 -07:00
Ishaan Jaff
93a2b00c93 fix StandardBuiltInToolsParams 2025-03-22 17:53:06 -07:00
Ishaan Jaff
cf01f49893 fixes for web search cost tracking 2025-03-22 16:56:32 -07:00
Ishaan Jaff
8aa13bf632 WebSearchOptions 2025-03-22 15:39:04 -07:00
Ishaan Jaff
01df7a49e5 add WebSearchOptions as supported chat completion param 2025-03-22 15:37:34 -07:00
Ishaan Jaff
02c5aebc87 search_context_cost_per_query 2025-03-22 14:52:58 -07:00
Ishaan Jaff
2d05ee2a8b fix supports_web_search 2025-03-22 14:02:51 -07:00
Ishaan Jaff
13bfe7d518 Add annotations to the delta 2025-03-22 11:38:30 -07:00
Ishaan Jaff
69da0ed3b5 feat - add openai web search 2025-03-22 10:43:35 -07:00
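A hedged sketch pulling together the web-search commits above, assuming the OpenAI `web_search_options` parameter and a search-capable model id; cost tracking then keys off the `search_context_cost_per_query` entry referenced earlier:

```python
# Hedged sketch: OpenAI web search through the standard completion call.
# `web_search_options` comes from the commits; the model id and context size
# value are assumptions.
import litellm

response = litellm.completion(
    model="openai/gpt-4o-search-preview",  # assumed search-capable model id
    messages=[{"role": "user", "content": "What changed in the latest litellm release?"}],
    web_search_options={"search_context_size": "medium"},
)

message = response.choices[0].message
print(message.content)
# URL citations, when returned, surface as annotations on the message; the
# "Add annotations to the delta" commit handles the streaming case.
print(getattr(message, "annotations", None))
```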
Ishaan Jaff
616c4db12d add litellm mcp endpoints 2025-03-20 21:12:56 -07:00
Ishaan Jaff
2581797cf7 load tools via load_tools_from_config 2025-03-20 17:36:17 -07:00
Ishaan Jaff
63d454bb8d add MCPToolRegistry 2025-03-20 17:22:12 -07:00