Commit graph

2223 commits

Author SHA1 Message Date
Krish Dholakia
5c107c64dd
Add gemini audio input support + handle special tokens in sagemaker response (#9640)
* fix(internal_user_endpoints.py): cleanup unused variables on beta endpoint

no team/org split on daily user endpoint

* build(model_prices_and_context_window.json): gemini-2.0-flash supports audio input

* feat(gemini/transformation.py): support passing audio input to gemini

* test: fix test

* fix(gemini/transformation.py): support audio input as a url

enables passing google cloud bucket urls

* fix(gemini/transformation.py): support explicitly passing format of file

* fix(gemini/transformation.py): expand support for inferred file types from url

* fix(sagemaker/completion/transformation.py): fix special token error when counting sagemaker tokens

* test: fix import
2025-03-29 19:23:09 -07:00
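A minimal sketch of the audio-input usage described in the commit above, assuming LiteLLM's OpenAI-compatible `file` content block for Gemini; the bucket URL, file format, and model name are illustrative placeholders, not values taken from this changeset.

```python
# Sketch: passing an audio file (as a Google Cloud Storage URL, with an explicit
# format) to Gemini through LiteLLM. A GEMINI_API_KEY is assumed to be set.
import litellm

response = litellm.completion(
    model="gemini/gemini-2.0-flash",  # the commit marks this model as supporting audio input
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Transcribe this clip."},
                {
                    "type": "file",
                    "file": {
                        "file_id": "gs://my-bucket/clip.wav",  # hypothetical bucket URL
                        "format": "audio/wav",                 # format passed explicitly
                    },
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```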
Krish Dholakia
9b7ebb6a7d
build(pyproject.toml): add new dev dependencies - for type checking (#9631)
* build(pyproject.toml): add new dev dependencies - for type checking

* build: reformat files to fit black

* ci: reformat to fit black

* ci(test-litellm.yml): make tests run clear

* build(pyproject.toml): add ruff

* fix: fix ruff checks

* build(mypy/): fix mypy linting errors

* fix(hashicorp_secret_manager.py): fix passing cert for tls auth

* build(mypy/): resolve all mypy errors

* test: update test

* fix: fix black formatting

* build(pre-commit-config.yaml): use poetry run black

* fix(proxy_server.py): fix linting error

* fix: fix ruff safe representation error
2025-03-29 11:02:13 -07:00
Krish Dholakia
5ac61a7572
Add bedrock latency optimized inference support (#9623)
* fix(converse_transformation.py): add performanceConfig param support on bedrock

Closes https://github.com/BerriAI/litellm/issues/7606

* fix(converse_transformation.py): refactor to use more flexible single getter for params which are separate config blocks

* test(test_main.py): add e2e mock test for bedrock performance config

* build(model_prices_and_context_window.json): add versioned multimodal embedding

* refactor(multimodal_embeddings/): migrate to config pattern

* feat(vertex_ai/multimodalembeddings): calculate usage for multimodal embedding calls

Enables cost calculation for multimodal embeddings

* feat(vertex_ai/multimodalembeddings): get usage object for embedding calls

ensures accurate cost tracking for vertexai multimodal embedding calls

* fix(embedding_handler.py): remove unused imports

* fix: fix linting errors

* fix: handle response api usage calculation

* test(test_vertex_ai_multimodal_embedding_transformation.py): update tests

* test: mark flaky test

* feat(vertex_ai/multimodal_embeddings/transformation.py): support text+image+video input

* docs(vertex.md): document sending text + image to vertex multimodal embeddings

* test: remove incorrect file

* fix(multimodal_embeddings/transformation.py): fix linting error

* style: remove unused import
2025-03-29 00:23:09 -07:00
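A hedged sketch of the latency-optimized inference support added above, assuming the Bedrock Converse `performanceConfig` block is forwarded as a pass-through parameter; the model ID is a placeholder and AWS credentials are assumed to be configured.

```python
# Sketch: requesting Bedrock latency-optimized inference by passing the
# Converse performanceConfig block through litellm.completion.
import litellm

response = litellm.completion(
    model="bedrock/us.anthropic.claude-3-5-haiku-20241022-v1:0",  # hypothetical model id
    messages=[{"role": "user", "content": "Summarize LiteLLM in one sentence."}],
    performanceConfig={"latency": "optimized"},  # forwarded to the Converse request body
)
print(response.choices[0].message.content)
```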
Krish Dholakia
4879e1ecf0
fix(openrouter/chat/transformation.py): raise informative message for openrouter key error (#9626)
Related Issue: https://github.com/Aider-AI/aider/issues/3550#issuecomment-2763052355
2025-03-28 20:24:28 -07:00
NickGrab
220d4c07f4
Merge pull request #9625 from BerriAI/litellm_mar_28_vertex_fix
Add support to Vertex AI transformation for anyOf union type with null fields
2025-03-28 16:09:29 -07:00
Krish Dholakia
222898d727
Fix anthropic thinking + response_format (#9594)
* fix(anthropic/chat/transformation.py): Don't set tool choice on response_format conversion when thinking is enabled

Not allowed by Anthropic

Fixes https://github.com/BerriAI/litellm/issues/8901

* refactor: move test to base anthropic chat tests

ensures consistent behaviour across vertex/anthropic/bedrock

* fix(anthropic/chat/transformation.py): if thinking token is specified and max tokens is not - ensure max token to anthropic is higher than thinking tokens

* feat(converse_transformation.py): correctly handle thinking + response format on Bedrock Converse

Fixes https://github.com/BerriAI/litellm/issues/8901

* fix(converse_transformation.py): correctly handle adding max tokens

* test: handle service unavailable error
2025-03-28 15:57:40 -07:00
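A simplified sketch of the guard described in the commit above: when a thinking budget is specified but `max_tokens` is not, the max tokens sent to Anthropic must exceed the thinking budget. Function and constant names here are hypothetical illustrations, not the actual transformation code.

```python
# Sketch: ensure max_tokens leaves room above the thinking budget.
DEFAULT_MAX_TOKENS = 4096  # assumed default, not taken from this changeset

def ensure_max_tokens_covers_thinking(optional_params: dict) -> dict:
    thinking = optional_params.get("thinking") or {}
    budget = thinking.get("budget_tokens", 0)
    max_tokens = optional_params.get("max_tokens")
    if budget and (max_tokens is None or max_tokens <= budget):
        # leave headroom above the thinking budget for the visible response
        optional_params["max_tokens"] = budget + DEFAULT_MAX_TOKENS
    return optional_params

print(ensure_max_tokens_covers_thinking(
    {"thinking": {"type": "enabled", "budget_tokens": 1024}}
))  # max_tokens becomes 5120
```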
Nicholas Grabar
1f2bbda11d Add recursion depth to convert_anyof_null_to_nullable, constants.py. Fix recursive_detector.py raise error state 2025-03-28 13:11:19 -07:00
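A simplified sketch of the `anyOf`/null handling named in the commits above: a schema branch of `{"type": "null"}` is dropped and the remaining branches are marked nullable, with a recursion-depth guard as added in constants.py. The depth limit value and exact structure are assumptions, not the actual implementation.

```python
# Sketch: convert {"anyOf": [{"type": "string"}, {"type": "null"}]} into a
# nullable schema, recursing into nested properties/items up to a depth limit.
MAX_RECURSE_DEPTH = 10  # assumed value; the real constant lives in constants.py

def convert_anyof_null_to_nullable(schema: dict, depth: int = 0) -> dict:
    if depth > MAX_RECURSE_DEPTH:
        raise ValueError("max recursion depth exceeded while handling anyOf/null schema")
    any_of = schema.get("anyOf")
    if any_of is not None:
        non_null = [s for s in any_of if s.get("type") != "null"]
        if len(non_null) < len(any_of):  # a {"type": "null"} branch was present
            for sub in non_null:
                sub["nullable"] = True
            schema["anyOf"] = non_null
    # recurse into nested object/array schemas
    for prop in schema.get("properties", {}).values():
        convert_anyof_null_to_nullable(prop, depth + 1)
    if isinstance(schema.get("items"), dict):
        convert_anyof_null_to_nullable(schema["items"], depth + 1)
    return schema

print(convert_anyof_null_to_nullable(
    {"anyOf": [{"type": "string"}, {"type": "null"}]}
))  # -> {"anyOf": [{"type": "string", "nullable": True}]}
```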
Krish Dholakia
0865e52db3
fix(proxy_server.py): get master key from environment, if not set in … (#9617)
* fix(proxy_server.py): get master key from environment, if not set in general settings or general settings not set at all

* test: mark flaky test

* test(test_proxy_server.py): mock prisma client

* ci: add new github workflow for testing just the mock tests

* fix: fix linting error

* ci(conftest.py): add conftest.py to isolate proxy tests

* build(pyproject.toml): add respx to dev dependencies

* build(pyproject.toml): add prisma to dev dependencies

* test: fix mock prompt management tests to use a mock anthropic key

* ci(test-litellm.yml): parallelize mock testing

make it run faster

* build(pyproject.toml): add hypercorn as dev dep

* build(pyproject.toml): separate proxy vs. core dev dependencies

make it easier for non-proxy contributors to run tests locally - e.g. no need to install hypercorn

* ci(test-litellm.yml): pin python version

* test(test_rerank.py): move test - cannot be mocked, requires aws credentials for e2e testing

* ci: add thank you message to ci

* test: add mock env var to test

* test: add autouse to tests

* test: test mock env vars for e2e tests
2025-03-28 12:32:04 -07:00
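A minimal sketch of the fallback described in the commit title above: prefer the master key from general settings, otherwise read it from the environment. This is an illustration of the ordering, not the proxy's actual code.

```python
# Sketch: resolve the proxy master key from general_settings, falling back to
# the LITELLM_MASTER_KEY environment variable when settings are absent.
import os
from typing import Optional

def resolve_master_key(general_settings: Optional[dict]) -> Optional[str]:
    if general_settings and general_settings.get("master_key"):
        return general_settings["master_key"]
    return os.environ.get("LITELLM_MASTER_KEY")

print(resolve_master_key(None))  # falls back to the env var (None if unset)
```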
Ishaan Jaff
e7181395ff fix code quality check 2025-03-28 10:32:39 -07:00
NickGrab
b72fbdde74
Merge branch 'main' into litellm_8864-feature-vertex-anyOf-support 2025-03-28 10:25:04 -07:00
Krrish Dholakia
69e28b92c6 fix: fix python38 linting error 2025-03-28 09:38:32 -07:00
Krrish Dholakia
34a1a48af1 fix(common_utils.py): fix linting error
2025-03-27 23:31:58 -07:00
Krish Dholakia
ccbac691e5
Support discovering gemini, anthropic, xai models by calling their /v1/model endpoint (#9530)
* fix: initial commit for adding provider model discovery to gemini

* feat(gemini/): add model discovery for gemini/ route

* docs(set_keys.md): update docs to show you can check available gemini models as well

* feat(anthropic/): add model discovery for anthropic api key

* feat(xai/): add model discovery for XAI

enables checking what models an xai key can call

* ci: bump ci config yml

* fix(topaz/common_utils.py): fix linting error

* fix: fix linting error for python38
2025-03-27 22:50:48 -07:00
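A hedged sketch of the model discovery feature above, based on the set_keys.md docs update referenced in the commit; the `check_provider_endpoint` flag is an assumption about this changeset, and real provider API keys must be present in the environment for the `/models` endpoints to be queried.

```python
# Sketch: list which models the configured gemini / anthropic / xai keys can
# actually call by querying each provider's model-listing endpoint.
from litellm import get_valid_models

models = get_valid_models(check_provider_endpoint=True)
print(models)  # e.g. ["gemini/gemini-2.0-flash", "anthropic/claude-3-5-sonnet-...", ...]
```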
Krish Dholakia
340a63bace
fix(mistral_chat_transformation.py): add missing comma (#9606)
2025-03-27 22:16:21 -07:00
Krish Dholakia
b9d0f460e8
Revert "Support max_completion_tokens on Mistral (#9589)" (#9604)
This reverts commit fef5d23dd5.
2025-03-27 19:14:26 -07:00
Chris Mancuso
fef5d23dd5
Support max_completion_tokens on Mistral (#9589)
* Support max_completion_tokens on Mistral

* test fix
2025-03-27 17:27:19 -07:00
Krish Dholakia
c0845fec1f
Add OpenAI gpt-4o-transcribe support (#9517)
* refactor: introduce new transformation config for gpt-4o-transcribe models

* refactor: expose new transformation configs for audio transcription

* ci: fix config yml

* feat(openai/transcriptions): support provider config transformation on openai audio transcriptions

allows gpt-4o and whisper audio transformation to work as expected

* refactor: migrate fireworks ai + deepgram to new transform request pattern

* feat(openai/): working support for gpt-4o-audio-transcribe

* build(model_prices_and_context_window.json): add gpt-4o-transcribe to model cost map

* build(model_prices_and_context_window.json): specify what endpoints are supported for `/audio/transcriptions`

* fix(get_supported_openai_params.py): fix return

* refactor(deepgram/): migrate unit test to deepgram handler

* refactor: cleanup unused imports

* fix(get_supported_openai_params.py): fix linting error

* test: update test
2025-03-26 23:10:25 -07:00
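A short sketch of using the newly supported transcription model described above; the audio file path is a placeholder and an OPENAI_API_KEY is assumed to be set.

```python
# Sketch: audio transcription with gpt-4o-transcribe via litellm.transcription.
import litellm

with open("speech.wav", "rb") as audio_file:  # hypothetical local file
    transcript = litellm.transcription(
        model="openai/gpt-4o-transcribe",
        file=audio_file,
    )
print(transcript.text)
```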
Ishaan Jaff
8499a88e4a fixes - anthropic messages interface 2025-03-26 17:45:47 -07:00
Ishaan Jaff
9eb9a369bb working anthropic API tests 2025-03-26 17:34:41 -07:00
Krish Dholakia
4351c77253
Support Gemini audio token cost tracking + fix openai audio input token cost tracking (#9535)
* fix(vertex_and_google_ai_studio_gemini.py): log gemini audio tokens in usage object

enables accurate cost tracking

* refactor(vertex_ai/cost_calculator.py): refactor 128k+ token cost calculation to only run if model info has it

Google has moved away from this for gemini-2.0 models

* refactor(vertex_ai/cost_calculator.py): migrate to usage object for more flexible data passthrough

* fix(llm_cost_calc/utils.py): support audio token cost tracking in generic cost per token

enables vertex ai cost tracking to work with audio tokens

* fix(llm_cost_calc/utils.py): default to total prompt tokens if text tokens field not set

* refactor(llm_cost_calc/utils.py): move openai cost tracking to generic cost per token

more consistent behaviour across providers

* test: add unit test for gemini audio token cost calculation

* ci: bump ci config

* test: fix test
2025-03-26 17:26:25 -07:00
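A simplified sketch of the generic cost-per-token idea described above: audio prompt tokens are priced separately from text prompt tokens, and text tokens fall back to the total prompt tokens when no breakdown is reported. The per-token prices below are made-up illustrations, not values from the model cost map.

```python
# Sketch: generic prompt cost calculation with an audio-token breakdown.
def generic_cost_per_token(usage: dict, model_info: dict) -> float:
    prompt_tokens = usage["prompt_tokens"]
    audio_tokens = usage.get("prompt_tokens_details", {}).get("audio_tokens", 0)
    text_tokens = prompt_tokens - audio_tokens  # equals total when no audio breakdown
    return (
        text_tokens * model_info["input_cost_per_token"]
        + audio_tokens * model_info.get(
            "input_cost_per_audio_token", model_info["input_cost_per_token"]
        )
        + usage["completion_tokens"] * model_info["output_cost_per_token"]
    )

print(generic_cost_per_token(
    {"prompt_tokens": 1200, "completion_tokens": 100,
     "prompt_tokens_details": {"audio_tokens": 1000}},
    {"input_cost_per_token": 1e-7, "input_cost_per_audio_token": 7e-7,
     "output_cost_per_token": 4e-7},
))  # -> 0.00076
```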
Ishaan Jaff
8dcdff9280 fix anthropic_messages 2025-03-26 17:21:14 -07:00
Ishaan Jaff
3640262dbf fix anthropic_messages implementation 2025-03-26 17:12:40 -07:00
Ishaan Jaff
957b7eb82c define types for response from Anthropic Messages API - AnthropicMessagesResponse 2025-03-26 16:54:45 -07:00
Ishaan Jaff
b7f4abd13a
Merge pull request #9542 from BerriAI/litellm_fix_vertex_ai_ft_models
[Feature]: Support for Fine-Tuned Vertex AI LLMs
2025-03-26 16:19:41 -07:00
Ishaan Jaff
0aae9aa24a rename _is_model_gemini_spec_model 2025-03-26 14:28:26 -07:00
Ishaan Jaff
8eaf4c55c0 test_gemini_fine_tuned_model_request_consistency 2025-03-26 14:18:11 -07:00
Ishaan Jaff
93daf5cbac _get_model_name_from_gemini_spec_model 2025-03-26 12:16:18 -07:00
Krish Dholakia
801ecb6517
Nova Canvas complete image generation tasks (#9177) (#9525)
* Nova Canvas complete image generation tasks (#9177)

* add initial support for Amazon Nova Canvas model

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* adjust name to AmazonNovaCanvas and map function variables to config

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* tighten model name check

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* fix quality mapping

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add premium quality in config

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* support all Amazon Nova Canvas tasks

* remove unused import

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add tests for image generation tasks and fix payload

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add missing util file

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* update model prices backup file

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* remove image tasks other than text->image

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add color guided generation task for Nova Canvas

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* fix merge

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add nova canvas image generation documentation

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add nova canvas unit tests

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

---------

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* ci(config.yml): bump ci config

* test: fix test

---------

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>
Co-authored-by: omrishiv <327609+omrishiv@users.noreply.github.com>
2025-03-26 11:28:20 -07:00
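A hedged sketch of a text-to-image call with Amazon Nova Canvas per the commits above; the model ID and region are illustrative, and AWS credentials are assumed to be configured in the environment.

```python
# Sketch: image generation with Amazon Nova Canvas through LiteLLM.
import litellm

response = litellm.image_generation(
    prompt="A lighthouse on a foggy coastline at dawn",
    model="bedrock/amazon.nova-canvas-v1:0",  # hypothetical model id
    aws_region_name="us-east-1",
)
print(response.data[0])  # generated image (base64-encoded) in the response data
```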
Ishaan Jaff
793a920caa rename _is_model_gemini_spec_model 2025-03-26 11:14:51 -07:00
Ishaan Jaff
baa9b34950 Merge branch 'main' into litellm_fix_vertex_ai_ft_models 2025-03-26 11:11:54 -07:00
Ishaan Jaff
8a72b67b18 undo code changes 2025-03-26 10:57:08 -07:00
Ishaan Jaff
bbe69a47a9 _is_model_gemini_gemini_spec_model 2025-03-26 10:53:23 -07:00
Ishaan Jaff
2bef0481af _transform_request_body 2025-03-26 00:05:45 -07:00
Krish Dholakia
6fd18651d1
Support litellm.api_base for vertex_ai + gemini/ across completion, embedding, image_generation (#9516)
* test(tests): add unit testing for litellm_proxy integration

* fix(cost_calculator.py): fix tracking cost in sdk when calling proxy

* fix(main.py): respect litellm.api_base on `vertex_ai/` and `gemini/` routes

* fix(main.py): consistently support custom api base across gemini + vertexai on embedding + completion

* feat(vertex_ai/): test

* fix: fix linting error

* test: set api base as None before starting loadtest
2025-03-25 23:46:20 -07:00
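A minimal sketch of the `litellm.api_base` behaviour described above: the same module-level base URL is respected on `gemini/` and `vertex_ai/` routes. The proxy URL is a placeholder.

```python
# Sketch: route gemini/ (and vertex_ai/) calls through a custom api_base.
import litellm

litellm.api_base = "https://my-litellm-proxy.internal"  # hypothetical base URL

response = litellm.completion(
    model="gemini/gemini-2.0-flash",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```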
Nicholas Grabar
f68cc26f15 8864 Add support for anyOf union type while handling null fields 2025-03-25 22:37:28 -07:00
Krish Dholakia
92883560f0
fix vertex ai multimodal embedding translation (#9471)
* remove data:image/jpeg;base64, prefix from base64 image input

vertex_ai's multimodal embeddings endpoint expects a raw base64 string without `data:image/jpeg;base64,` prefix.

* Add Vertex Multimodal Embedding Test

* fix(test_vertex.py): add e2e tests on multimodal embeddings

* test: unit testing

* test: remove sklearn dep

* test: update test with fixed route

* test: fix test

---------

Co-authored-by: Jonarod <jonrodd@gmail.com>
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
2025-03-24 23:23:28 -07:00
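A simplified illustration of the fix described above: strip a data-URL prefix such as `data:image/jpeg;base64,` so that only the raw base64 string reaches vertex_ai's multimodal embeddings endpoint. The helper name is hypothetical.

```python
# Sketch: drop the data-URL prefix before sending base64 image input to Vertex.
def strip_data_url_prefix(image_input: str) -> str:
    if image_input.startswith("data:") and "base64," in image_input:
        return image_input.split("base64,", 1)[1]
    return image_input

print(strip_data_url_prefix("data:image/jpeg;base64,/9j/4AAQSkZJRg=="))
# -> "/9j/4AAQSkZJRg=="
```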
Krish Dholakia
a619580bf8
Add vertexai topLogprobs support (#9518)
* Added support for top_logprobs in vertex gemini models

* Testing for top_logprobs feature in vertexai

* Update litellm/llms/vertex_ai/gemini/vertex_and_google_ai_studio_gemini.py

Co-authored-by: Tom Matthews <tomukmatthews@gmail.com>

* refactor(tests/): refactor testing to be in correct repo

---------

Co-authored-by: Aditya Thaker <adityathaker28@gmail.com>
Co-authored-by: Tom Matthews <tomukmatthews@gmail.com>
2025-03-24 22:42:38 -07:00
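A short sketch of requesting log probabilities from a Vertex AI Gemini model, per the top_logprobs support added above; model name and parameter values are illustrative, and Vertex credentials are assumed.

```python
# Sketch: top_logprobs on a vertex_ai Gemini model via the OpenAI-style params.
import litellm

response = litellm.completion(
    model="vertex_ai/gemini-1.5-flash",
    messages=[{"role": "user", "content": "Say hi"}],
    logprobs=True,
    top_logprobs=2,
)
print(response.choices[0].logprobs)
```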
Ishaan Jaff
12639b7ccf fix sagemaker streaming error 2025-03-24 21:29:29 -07:00
Krrish Dholakia
5089dbfcfb fix(invoke_handler.py): remove hard code 2025-03-24 17:58:26 -07:00
Krrish Dholakia
06e69a414e fix(vertex_ai/common_utils.py): fix handling constructed url with default vertex config 2025-03-22 11:32:01 -07:00
Krrish Dholakia
94d3413335 refactor(llm_passthrough_endpoints.py): refactor vertex passthrough to use common llm passthrough handler.py 2025-03-22 10:42:46 -07:00
Krrish Dholakia
48e6a7036b test: mock sagemaker tests 2025-03-21 16:21:18 -07:00
Krrish Dholakia
86be28b640 fix: fix linting error 2025-03-21 12:20:21 -07:00
Krrish Dholakia
e7ef14398f fix(anthropic/chat/transformation.py): correctly update response_format to tool call transformation
Fixes https://github.com/BerriAI/litellm/issues/9411
2025-03-21 10:20:21 -07:00
Ishaan Jaff
c44fe8bd90
Merge pull request #9419 from BerriAI/litellm_streaming_o1_pro
[Feat] OpenAI o1-pro Responses API streaming support
2025-03-20 21:54:43 -07:00
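A hedged sketch of streaming o1-pro through the Responses API, per the merged PR above; the `litellm.responses` call shape is an assumption about this changeset, an OPENAI_API_KEY is assumed, and event handling is simplified.

```python
# Sketch: stream Responses API events for o1-pro and print them as they arrive.
import litellm

stream = litellm.responses(
    model="openai/o1-pro",
    input="Write a haiku about commit logs.",
    stream=True,
)
for event in stream:
    print(event)
```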
Krish Dholakia
ab385848c1
Merge pull request #9260 from Grizzly-jobs/fix/voyage-ai-token-usage-tracking
fix: VoyageAI `prompt_token` always empty
2025-03-20 14:00:51 -07:00
Ishaan Jaff
1829cc2042 fix code quality checks 2025-03-20 13:57:35 -07:00
Krish Dholakia
706bcf4432
Merge pull request #9366 from JamesGuthrie/jg/vertex-output-dimensionality
fix: VertexAI outputDimensionality configuration
2025-03-20 13:55:33 -07:00
Ishaan Jaff
a29587e178 MockResponsesAPIStreamingIterator 2025-03-20 12:30:09 -07:00
Ishaan Jaff
55115bf520 transform_responses_api_request 2025-03-20 12:28:55 -07:00