Krish Dholakia
4351c77253
Support Gemini audio token cost tracking + fix OpenAI audio input token cost tracking (#9535)
* fix(vertex_and_google_ai_studio_gemini.py): log gemini audio tokens in usage object
enables accurate cost tracking
* refactor(vertex_ai/cost_calculator.py): refactor 128k+ token cost calculation to only run if model info has it
Google has moved away from this for gemini-2.0 models
* refactor(vertex_ai/cost_calculator.py): migrate to usage object for more flexible data passthrough
* fix(llm_cost_calc/utils.py): support audio token cost tracking in generic cost per token
enables vertex ai cost tracking to work with audio tokens
* fix(llm_cost_calc/utils.py): default to total prompt tokens if text tokens field not set
* refactor(llm_cost_calc/utils.py): move openai cost tracking to generic cost per token
more consistent behaviour across providers
* test: add unit test for gemini audio token cost calculation
* ci: bump ci config
* test: fix test
2025-03-26 17:26:25 -07:00
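For the cost-tracking commit above, a minimal sketch of how audio prompt tokens might be priced separately in a generic cost-per-token helper, including the fallback to total prompt tokens when no text-token count is reported. This is illustrative only, not LiteLLM's actual implementation; the usage/model-info key names are assumptions.

```python
# Hedged sketch: price text and audio prompt tokens separately, defaulting the
# text-token count to the total prompt tokens when the field is not set.
def generic_cost_per_token(usage: dict, model_info: dict) -> tuple[float, float]:
    prompt_tokens = usage.get("prompt_tokens", 0)
    details = usage.get("prompt_tokens_details") or {}
    audio_tokens = details.get("audio_tokens") or 0
    text_tokens = details.get("text_tokens")
    if text_tokens is None:
        # Mirror the commit's fallback: use total prompt tokens when no
        # separate text-token count is reported.
        text_tokens = prompt_tokens

    prompt_cost = (
        text_tokens * model_info["input_cost_per_token"]
        + audio_tokens
        * model_info.get("input_cost_per_audio_token", model_info["input_cost_per_token"])
    )
    completion_cost = usage.get("completion_tokens", 0) * model_info["output_cost_per_token"]
    return prompt_cost, completion_cost
```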
Ishaan Jaff
0aae9aa24a
rename _is_model_gemini_spec_model
2025-03-26 14:28:26 -07:00
Ishaan Jaff
c38b41f65b
test_get_supports_system_message
2025-03-26 14:26:08 -07:00
Ishaan Jaff
72f08bc6ea
unit tests for VertexGeminiConfig
2025-03-26 14:21:35 -07:00
Krish Dholakia
6fd18651d1
Support litellm.api_base for vertex_ai + gemini/ across completion, embedding, image_generation (#9516)
* test(tests): add unit testing for litellm_proxy integration
* fix(cost_calculator.py): fix tracking cost in sdk when calling proxy
* fix(main.py): respect litellm.api_base on `vertex_ai/` and `gemini/` routes
* fix(main.py): consistently support custom api base across gemini + vertexai on embedding + completion
* feat(vertex_ai/): test
* fix: fix linting error
* test: set api base as None before starting loadtest
2025-03-25 23:46:20 -07:00
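A hedged usage sketch for the change above: once litellm.api_base is set, the same base URL should be respected on vertex_ai/ and gemini/ routes for completion and embedding. The URL below is a placeholder.

```python
import litellm

litellm.api_base = "http://localhost:4000"  # placeholder, e.g. a LiteLLM proxy

resp = litellm.completion(
    model="gemini/gemini-1.5-flash",
    messages=[{"role": "user", "content": "hello"}],
)
emb = litellm.embedding(
    model="vertex_ai/text-embedding-004",
    input=["hello"],
)
```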
Nicholas Grabar
f68cc26f15
8864 Add support for anyOf union type while handling null fields
2025-03-25 22:37:28 -07:00
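For the anyOf commit above, a hedged illustration of the schema shape involved: optional fields are commonly emitted as an anyOf union that includes a null branch, which the translation layer has to handle.

```python
# Illustrative JSON schema for an optional field; the null branch is the part
# that needs special handling when translating the union type.
schema = {
    "type": "object",
    "properties": {
        "nickname": {
            "anyOf": [
                {"type": "string"},
                {"type": "null"},
            ]
        }
    },
    "required": ["nickname"],
}
```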
Krish Dholakia
92883560f0
fix vertex ai multimodal embedding translation (#9471)
* remove data:image/jpeg;base64, prefix from base64 image input
vertex_ai's multimodal embeddings endpoint expects a raw base64 string without the `data:image/jpeg;base64,` prefix.
* Add Vertex Multimodal Embedding Test
* fix(test_vertex.py): add e2e tests on multimodal embeddings
* test: unit testing
* test: remove sklearn dep
* test: update test with fixed route
* test: fix test
---------
Co-authored-by: Jonarod <jonrodd@gmail.com>
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
2025-03-24 23:23:28 -07:00
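A minimal sketch of the translation described above: strip a data-URL prefix so the multimodal embeddings endpoint receives raw base64. The helper name is illustrative, not LiteLLM's actual function.

```python
def strip_data_url_prefix(image_b64: str) -> str:
    # "data:image/jpeg;base64,AAAA..." -> "AAAA..."
    marker = "base64,"
    if image_b64.startswith("data:") and marker in image_b64:
        return image_b64.split(marker, 1)[1]
    return image_b64
```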
Krish Dholakia
a619580bf8
Add vertexai topLogprobs support (#9518)
* Added support for top_logprobs in vertex gemini models
* Testing for top_logprobs feature in vertexai
* Update litellm/llms/vertex_ai/gemini/vertex_and_google_ai_studio_gemini.py
Co-authored-by: Tom Matthews <tomukmatthews@gmail.com>
* refactor(tests/): refactor testing to be in correct repo
---------
Co-authored-by: Aditya Thaker <adityathaker28@gmail.com>
Co-authored-by: Tom Matthews <tomukmatthews@gmail.com>
2025-03-24 22:42:38 -07:00
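A hedged usage sketch for the top_logprobs support above, using OpenAI-style parameters on a Vertex AI Gemini model; the model name is a placeholder.

```python
import litellm

resp = litellm.completion(
    model="vertex_ai/gemini-1.5-pro",
    messages=[{"role": "user", "content": "Say hi"}],
    logprobs=True,
    top_logprobs=2,
)
print(resp.choices[0].logprobs)
```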
Ishaan Jaff
08a4ba1b7e
Merge branch 'main' into litellm_exp_mcp_server
2025-03-24 19:03:56 -07:00
Krrish Dholakia
44e305648d
test(test_spend_management_endpoints.py): add unit testing for router + spend logs
2025-03-24 15:33:02 -07:00
Krrish Dholakia
1dc15ef5bf
test(test_spend_management_endpoints.py): guarantee consistent spend logs
2025-03-24 15:29:47 -07:00
Krrish Dholakia
e1bad1befa
test: add e2e testing
2025-03-24 15:12:18 -07:00
Krrish Dholakia
75722c4b13
test: add unit test
2025-03-24 14:45:20 -07:00
Krrish Dholakia
6a0cf3db50
fix(litellm_logging.py): always log the api base
Fixes issue where the api base was missing from spend logs due to a refactor
2025-03-24 13:45:39 -07:00
Tyler Hutcherson
7864cd1f76
update redisvl dependency
2025-03-24 08:42:11 -04:00
Ishaan Jaff
6f7d618918
test tool call cost tracking
2025-03-22 19:47:13 -07:00
Ishaan Jaff
f21a0c2da7
Merge branch 'main' into litellm_exp_mcp_server
2025-03-22 18:51:25 -07:00
Krish Dholakia
d3baaf7961
Merge pull request #9467 from BerriAI/litellm_dev_03_22_2025_p1
Refactor vertex ai passthrough routes - fixes unpredictable behaviour w/ auto-setting default_vertex_region on router model add
2025-03-22 14:11:57 -07:00
Krrish Dholakia
3ce3689282
test: migrate testing
2025-03-22 12:48:53 -07:00
Krrish Dholakia
92d4486a2c
fix(llm_passthrough_endpoints.py): raise verbose error if credentials not found on proxy
2025-03-22 11:49:51 -07:00
Ishaan Jaff
792a2d6115
test_is_chunk_non_empty_with_annotations
2025-03-22 11:41:53 -07:00
Krrish Dholakia
be72ecc23f
test: add more e2e testing
2025-03-22 11:35:57 -07:00
Krrish Dholakia
06e69a414e
fix(vertex_ai/common_utils.py): fix handling constructed url with default vertex config
2025-03-22 11:32:01 -07:00
Krrish Dholakia
b44b3bd36b
feat(llm_passthrough_endpoints.py): base case passing for refactored vertex passthrough route
2025-03-22 11:06:52 -07:00
Krrish Dholakia
94d3413335
refactor(llm_passthrough_endpoints.py): refactor vertex passthrough to use common llm passthrough handler.py
2025-03-22 10:42:46 -07:00
Krish Dholakia
950edd76b3
Merge pull request #9454 from BerriAI/litellm_dev_03_21_2025_p3
Fix route check for non-proxy admins on jwt auth
2025-03-21 22:32:46 -07:00
Ishaan Jaff
ed74b419a3
Merge pull request #9436 from BerriAI/litellm_mcp_interface
[Feat] LiteLLM x MCP Bridge - Use MCP Tools with LiteLLM
2025-03-21 20:42:16 -07:00
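For the MCP bridge feature above, a hedged sketch of the kind of translation such a bridge needs: an OpenAI-style tool call carries JSON-encoded arguments, while an MCP CallTool request takes a tool name plus an arguments dict. The function name and return shape are illustrative, not the bridge's actual API.

```python
import json

def openai_tool_call_to_mcp_request(tool_call: dict) -> dict:
    # tool_call: {"function": {"name": "...", "arguments": "{...json...}"}}
    fn = tool_call["function"]
    return {
        "name": fn["name"],
        "arguments": json.loads(fn.get("arguments") or "{}"),
    }
```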
Ishaan Jaff
7b5c0de978
test_tools.py
2025-03-21 18:38:24 -07:00
Ishaan Jaff
881ac23964
test_transform_openai_tool_call_to_mcp_tool_call_request tests
2025-03-21 18:24:43 -07:00
Krrish Dholakia
1ebdeb852c
test(test_internal_user_endpoints.py): add unit testing to handle user_email=None
2025-03-21 18:06:20 -07:00
Krish Dholakia
dfb41c927e
Merge pull request #9448 from BerriAI/litellm_dev_03_21_2025_p2
Set max size limit to in-memory cache item - prevents OOM errors
2025-03-21 17:51:46 -07:00
Krrish Dholakia
c7b17495a1
test: add unit testing
2025-03-21 15:01:19 -07:00
Krrish Dholakia
dfea55a1e7
fix(in_memory_cache.py): add max value limits to in-memory cache. Prevents OOM errors in prod
2025-03-21 14:51:12 -07:00
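A hedged sketch of the guard described above (not the exact in_memory_cache.py code): skip caching items whose approximate size exceeds a configured ceiling, so a single oversized value cannot drive the in-memory cache toward OOM. The limit value is illustrative.

```python
import sys

MAX_ITEM_SIZE_BYTES = 1024 * 1024  # illustrative 1 MB ceiling

def set_cache_item(cache: dict, key: str, value) -> bool:
    # Approximate size check; too-large values are skipped rather than cached.
    if sys.getsizeof(value) > MAX_ITEM_SIZE_BYTES:
        return False
    cache[key] = value
    return True
```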
Krrish Dholakia
81a1494a51
test: add unit testing
2025-03-21 10:35:36 -07:00
Ishaan Jaff
5bc07b0c5d
test tool registry
2025-03-20 22:03:56 -07:00
Ishaan Jaff
c44fe8bd90
Merge pull request #9419 from BerriAI/litellm_streaming_o1_pro
[Feat] OpenAI o1-pro Responses API streaming support
2025-03-20 21:54:43 -07:00
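Reading the o1-pro streaming feature above together with the test_prepare_fake_stream_request commit below, a hedged sketch of the "fake streaming" idea: when an endpoint does not support true streaming, send a non-streaming request and replay the complete response to the caller as a single stream chunk. Function names are illustrative.

```python
def prepare_fake_stream_request(request_body: dict) -> dict:
    # The downstream call is made without streaming.
    body = dict(request_body)
    body.pop("stream", None)
    return body

def fake_stream(complete_response: dict):
    # Emit the full response as one chunk so streaming callers still work.
    yield complete_response
```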
Ishaan Jaff
15048de5e2
test_prepare_fake_stream_request
2025-03-20 14:50:00 -07:00
Krrish Dholakia
46d68a61c8
fix: fix testing
2025-03-20 14:37:58 -07:00
Ishaan Jaff
1bd7443c25
Merge pull request #9384 from BerriAI/litellm_prompt_management_custom
[Feat] - Allow building custom prompt management integration
2025-03-19 21:06:41 -07:00
Ishaan Jaff
247e4d09ee
Merge branch 'main' into litellm_fix_ssl_verify
2025-03-19 21:03:06 -07:00
Ishaan Jaff
30fdd934a4
TestCustomPromptManagement
2025-03-19 17:40:15 -07:00
Krish Dholakia
9432d1a865
Merge pull request #9357 from BerriAI/litellm_dev_03_18_2025_p2
fix(lowest_tpm_rpm_v2.py): support batch writing increments to redis
2025-03-19 15:45:10 -07:00
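For the batched Redis increments above, a hedged sketch of the general pattern (not the exact lowest_tpm_rpm_v2.py code): accumulate counter deltas in memory and flush them in one pipeline call instead of issuing an INCRBY per request.

```python
import redis

def flush_increments(client: redis.Redis, pending: dict[str, int]) -> None:
    # One round trip for all accumulated counter increments.
    pipe = client.pipeline(transaction=False)
    for key, delta in pending.items():
        pipe.incrby(key, delta)
    pipe.execute()
    pending.clear()
```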
Krrish Dholakia
041d5391eb
test(test_proxy_server.py): make test work on ci/cd
2025-03-19 12:01:37 -07:00
Krrish Dholakia
858da57b3c
test(test_proxy_server.py): add unit test to ensure get credentials only called behind feature flag
2025-03-19 11:44:00 -07:00
Krrish Dholakia
9adad381b4
fix(common_utils.py): handle cris only model
Fixes https://github.com/BerriAI/litellm/issues/9161#issuecomment-2734905153
2025-03-18 23:35:43 -07:00
Krrish Dholakia
084e8c425c
refactor(base_routing_strategy.py): fix function names
2025-03-18 22:41:02 -07:00
Krrish Dholakia
3033c40739
fix(base_routing_strategy.py): fix base to handle no running event loop
run in a separate thread
2025-03-18 22:20:39 -07:00
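A minimal sketch of the fallback described above, assuming an illustrative helper name: if no event loop is running, execute the coroutine on its own loop in a separate thread instead of failing.

```python
import asyncio
import threading

def run_async_safely(coro) -> None:
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        # No running loop: run the coroutine on a fresh loop in another thread.
        threading.Thread(target=asyncio.run, args=(coro,), daemon=True).start()
        return
    loop.create_task(coro)
```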
Krrish Dholakia
a3d000baaa
fix(test_base_routing_strategy.py): add unit testing for new base routing strategy test
2025-03-18 19:59:06 -07:00
Ishaan Jaff
65083ca8da
get_openai_client_cache_key
2025-03-18 18:35:50 -07:00
Ishaan Jaff
40418c7bd8
test_openai_client_reuse
2025-03-18 18:13:36 -07:00