Commit graph

2244 commits

Author SHA1 Message Date
Ishaan Jaff
baa9b34950 Merge branch 'main' into litellm_fix_vertex_ai_ft_models 2025-03-26 11:11:54 -07:00
Ishaan Jaff
8a72b67b18 undo code changes 2025-03-26 10:57:08 -07:00
Ishaan Jaff
bbe69a47a9 _is_model_gemini_gemini_spec_model 2025-03-26 10:53:23 -07:00
Ishaan Jaff
2bef0481af _transform_request_body 2025-03-26 00:05:45 -07:00
Krish Dholakia
6fd18651d1
Support litellm.api_base for vertex_ai + gemini/ across completion, embedding, image_generation (#9516)
* test(tests): add unit testing for litellm_proxy integration

* fix(cost_calculator.py): fix tracking cost in sdk when calling proxy

* fix(main.py): respect litellm.api_base on `vertex_ai/` and `gemini/` routes

* fix(main.py): consistently support custom api base across gemini + vertexai on embedding + completion

* feat(vertex_ai/): test

* fix: fix linting error

* test: set api base as None before starting loadtest
2025-03-25 23:46:20 -07:00
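A minimal usage sketch of what this PR describes: routing `gemini/` (or `vertex_ai/`) traffic through a custom base URL. The gateway URL below is a placeholder, not taken from the PR.

```python
import litellm

# Assumption for illustration: pointing litellm.api_base at a custom gateway
# reroutes gemini/ and vertex_ai/ requests away from the default Google host.
litellm.api_base = "https://my-gateway.example.com"  # hypothetical endpoint

response = litellm.completion(
    model="gemini/gemini-1.5-flash",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```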
Nicholas Grabar
f68cc26f15 8864 Add support for anyOf union type while handling null fields 2025-03-25 22:37:28 -07:00
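For context, an Optional field in a Pydantic/OpenAPI schema is emitted as an `anyOf` union that includes a null branch; a schema of roughly this shape (illustrative only, not from the commit) is what the change above handles:

```python
# Illustrative shape of an anyOf union with a null branch, as produced for an
# Optional[str] field.
schema = {
    "type": "object",
    "properties": {
        "nickname": {
            "anyOf": [{"type": "string"}, {"type": "null"}],
        },
    },
    "required": ["nickname"],
}
```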
Krish Dholakia
92883560f0
fix vertex ai multimodal embedding translation (#9471)
* remove data:image/jpeg;base64, prefix from base64 image input

vertex_ai's multimodal embeddings endpoint expects a raw base64 string without the `data:image/jpeg;base64,` prefix.

* Add Vertex Multimodal Embedding Test

* fix(test_vertex.py): add e2e tests on multimodal embeddings

* test: unit testing

* test: remove sklearn dep

* test: update test with fixed route

* test: fix test

---------

Co-authored-by: Jonarod <jonrodd@gmail.com>
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
2025-03-24 23:23:28 -07:00
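A small sketch of the stripping behavior the first bullet describes (helper name and logic are illustrative, not the commit's actual code):

```python
def strip_data_uri_prefix(image_input: str) -> str:
    # Drop a "data:image/jpeg;base64," style prefix so only the raw base64
    # payload is sent to Vertex AI's multimodal embeddings endpoint.
    if image_input.startswith("data:") and "base64," in image_input:
        return image_input.split("base64,", 1)[1]
    return image_input

raw_b64 = strip_data_uri_prefix("data:image/jpeg;base64,/9j/4AAQSkZJRg==")
```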
Krish Dholakia
a619580bf8
Add vertexai topLogprobs support (#9518)
* Added support for top_logprobs in vertex gemini models

* Testing for top_logprobs feature in vertexai

* Update litellm/llms/vertex_ai/gemini/vertex_and_google_ai_studio_gemini.py

Co-authored-by: Tom Matthews <tomukmatthews@gmail.com>

* refactor(tests/): refactor testing to be in correct repo

---------

Co-authored-by: Aditya Thaker <adityathaker28@gmail.com>
Co-authored-by: Tom Matthews <tomukmatthews@gmail.com>
2025-03-24 22:42:38 -07:00
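A usage sketch for the feature above, assuming the OpenAI-style `logprobs`/`top_logprobs` parameters are the intended interface:

```python
import litellm

# Request token log-probabilities from a Vertex AI Gemini model; per this PR,
# top_logprobs is translated to Gemini's topLogprobs setting.
response = litellm.completion(
    model="vertex_ai/gemini-1.5-pro",
    messages=[{"role": "user", "content": "Say hi"}],
    logprobs=True,
    top_logprobs=2,
)
```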
Ishaan Jaff
12639b7ccf fix sagemaker streaming error 2025-03-24 21:29:29 -07:00
Krrish Dholakia
5089dbfcfb fix(invoke_handler.py): remove hard code 2025-03-24 17:58:26 -07:00
Krrish Dholakia
06e69a414e fix(vertex_ai/common_utils.py): fix handling constructed url with default vertex config 2025-03-22 11:32:01 -07:00
Krrish Dholakia
94d3413335 refactor(llm_passthrough_endpoints.py): refactor vertex passthrough to use common llm passthrough handler.py 2025-03-22 10:42:46 -07:00
Krrish Dholakia
48e6a7036b test: mock sagemaker tests 2025-03-21 16:21:18 -07:00
Krrish Dholakia
86be28b640 fix: fix linting error 2025-03-21 12:20:21 -07:00
Krrish Dholakia
e7ef14398f fix(anthropic/chat/transformation.py): correctly update response_format to tool call transformation
Fixes https://github.com/BerriAI/litellm/issues/9411
2025-03-21 10:20:21 -07:00
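A hedged sketch of the call path this fix touches: passing a JSON `response_format` to an Anthropic model, which litellm translates into a forced tool call behind the scenes (the model name is just an example):

```python
import litellm

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=[{"role": "user", "content": "Return a JSON object with a 'city' key."}],
    response_format={"type": "json_object"},  # converted to a tool call for Anthropic
)
```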
Ishaan Jaff
c44fe8bd90
Merge pull request #9419 from BerriAI/litellm_streaming_o1_pro
[Feat] OpenAI o1-pro Responses API streaming support
2025-03-20 21:54:43 -07:00
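A sketch of the streaming path this PR adds, assuming the Responses API is exposed via a `litellm.responses()` entry point as the surrounding commits (`transform_responses_api_request`, `fake_stream`) suggest:

```python
import litellm

# Assumption: litellm.responses() mirrors the OpenAI Responses API surface.
stream = litellm.responses(
    model="openai/o1-pro",
    input="Summarize the litellm project in one sentence.",
    stream=True,  # o1-pro streaming may be faked client-side per the fake_stream commits
)
for event in stream:
    print(event)
```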
Krish Dholakia
ab385848c1
Merge pull request #9260 from Grizzly-jobs/fix/voyage-ai-token-usage-tracking
fix: VoyageAI `prompt_token` always empty
2025-03-20 14:00:51 -07:00
Ishaan Jaff
1829cc2042 fix code quality checks 2025-03-20 13:57:35 -07:00
Krish Dholakia
706bcf4432
Merge pull request #9366 from JamesGuthrie/jg/vertex-output-dimensionality
fix: VertexAI outputDimensionality configuration
2025-03-20 13:55:33 -07:00
Ishaan Jaff
a29587e178 MockResponsesAPIStreamingIterator 2025-03-20 12:30:09 -07:00
Ishaan Jaff
55115bf520 transform_responses_api_request 2025-03-20 12:28:55 -07:00
Ishaan Jaff
0cd671785d add fake_stream to llm http handler 2025-03-20 09:55:59 -07:00
Ishaan Jaff
bc174adcd0 add should_fake_stream 2025-03-20 09:54:26 -07:00
Krrish Dholakia
fe24b9d90b feat(azure/gpt_transformation.py): add azure audio model support
Closes https://github.com/BerriAI/litellm/issues/6305
2025-03-19 22:57:49 -07:00
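A usage sketch of the capability referenced here; the Azure deployment name and audio options are placeholders, not from the commit:

```python
import litellm

response = litellm.completion(
    model="azure/gpt-4o-audio-preview",  # hypothetical deployment name
    messages=[{"role": "user", "content": "Say hello out loud."}],
    modalities=["text", "audio"],
    audio={"voice": "alloy", "format": "wav"},
)
```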
Ishaan Jaff
9203910ab6 fix import hashlib 2025-03-19 21:08:19 -07:00
Ishaan Jaff
247e4d09ee
Merge branch 'main' into litellm_fix_ssl_verify 2025-03-19 21:03:06 -07:00
James Guthrie
437dbe7246 fix: VertexAI outputDimensionality configuration
VertexAI's API documentation [1] is an absolute mess. In it, they
describe the parameter to configure output dimensionality as snake-case
`output_dimensionality`. In the API example, they switch to the camelCase
form `outputDimensionality`, which is the correct variant.

[1]: https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api#generative-ai-get-text-embedding-drest
2025-03-19 11:07:36 +01:00
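A usage sketch for the fix, assuming the OpenAI-style `dimensions` parameter is how callers set the output size (litellm would then forward it as the camelCase `outputDimensionality` field the API accepts):

```python
import litellm

response = litellm.embedding(
    model="vertex_ai/text-embedding-004",
    input=["hello world"],
    dimensions=256,  # forwarded to Vertex AI as outputDimensionality
)
```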
Krish Dholakia
01c6cbd270
Merge pull request #9363 from BerriAI/litellm_dev_03_18_2025_p3
fix(common_utils.py): handle cris only model
2025-03-18 23:36:12 -07:00
Krrish Dholakia
9adad381b4 fix(common_utils.py): handle cris only model
Fixes https://github.com/BerriAI/litellm/issues/9161#issuecomment-2734905153
2025-03-18 23:35:43 -07:00
Ishaan Jaff
65083ca8da get_openai_client_cache_key 2025-03-18 18:35:50 -07:00
Ishaan Jaff
3daef0d740 fix common utils 2025-03-18 17:59:46 -07:00
Ishaan Jaff
a45830dac3 use common caching logic for openai/azure clients 2025-03-18 17:57:03 -07:00
Ishaan Jaff
f73e9047dc use common logic for re-using openai clients 2025-03-18 17:56:32 -07:00
Ishaan Jaff
55ea2370ba Union[TranscriptionResponse, Coroutine[Any, Any, TranscriptionResponse]]: 2025-03-18 14:23:14 -07:00
Ishaan Jaff
b20a69f9fc fix code quality 2025-03-18 12:58:59 -07:00
Ishaan Jaff
dc3d7b3afc test_azure_instruct 2025-03-18 12:56:11 -07:00
Ishaan Jaff
2cd49ef096 fix test_ensure_initialize_azure_sdk_client_always_used 2025-03-18 12:46:55 -07:00
Ishaan Jaff
b60178f534 fix azure chat logic 2025-03-18 12:42:24 -07:00
Ishaan Jaff
80a5cfa01d test_azure_embedding_max_retries_0 2025-03-18 12:35:34 -07:00
Ishaan Jaff
842625a6f0 test_completion_azure_ad_token 2025-03-18 12:25:32 -07:00
Ishaan Jaff
d4b3082ca2 fix azure embedding test 2025-03-18 12:19:12 -07:00
Ishaan Jaff
38e2dd00cc fix embedding issue on ssl azure 2025-03-18 11:42:11 -07:00
Ishaan Jaff
dfd7a7d547 fix linting error 2025-03-18 11:38:31 -07:00
Ishaan Jaff
3458c69eb0 fix common utils 2025-03-18 11:04:02 -07:00
Ishaan Jaff
c1e0cb136e fix using azure openai clients 2025-03-18 10:47:29 -07:00
Ishaan Jaff
e34be5a3b6 use get_azure_openai_client 2025-03-18 10:28:39 -07:00
Ishaan Jaff
a0c5fb81b8 fix logic for initializing openai clients 2025-03-18 10:23:30 -07:00
Ishaan Jaff
0601768bb8 use ssl on initialize_azure_sdk_client 2025-03-18 10:14:51 -07:00
Ishaan Jaff
34142a1b62 _init_azure_client_for_cloudflare_ai_gateway 2025-03-18 10:11:54 -07:00
Ishaan Jaff
edfbf21c39 fix re-using azure openai client 2025-03-18 10:06:56 -07:00