Commit graph

1605 commits

Author | SHA1 | Message | Date
Ishaan Jaff | bddbeff717 | test_openai_web_search_logging_cost_tracking | 2025-03-22 14:29:02 -07:00
Krish Dholakia | d3baaf7961 | Merge pull request #9467 from BerriAI/litellm_dev_03_22_2025_p1: Refactor vertex ai passthrough routes - fixes unpredictable behaviour w/ auto-setting default_vertex_region on router model add | 2025-03-22 14:11:57 -07:00
Ishaan Jaff | 1d7accce9e | test_supports_web_search | 2025-03-22 13:49:35 -07:00
Ishaan Jaff | 69c9a782b2 | add supports_web_search | 2025-03-22 13:32:22 -07:00
Ishaan Jaff | 78c371d2e8 | search_context_cost_per_query test | 2025-03-22 13:08:57 -07:00
Krrish Dholakia | 6b2f385ddf | test: update tests | 2025-03-22 12:56:42 -07:00
Ishaan Jaff | 1bdb94a314 | add search_context_cost_per_1k_calls to model cost map spec | 2025-03-22 12:56:21 -07:00
Krrish Dholakia | 6f719d0461 | test: fix test | 2025-03-22 12:50:58 -07:00
Krrish Dholakia | 3ce3689282 | test: migrate testing | 2025-03-22 12:48:53 -07:00
Krrish Dholakia | 92d4486a2c | fix(llm_passthrough_endpoints.py): raise verbose error if credentials not found on proxy | 2025-03-22 11:49:51 -07:00
Ishaan Jaff | 792a2d6115 | test_is_chunk_non_empty_with_annotations | 2025-03-22 11:41:53 -07:00
Ishaan Jaff | 5c59a6a58f | test_openai_web_search_streaming | 2025-03-22 11:36:34 -07:00
Krrish Dholakia | be72ecc23f | test: add more e2e testing | 2025-03-22 11:35:57 -07:00
Krrish Dholakia | 06e69a414e | fix(vertex_ai/common_utils.py): fix handling constructed url with default vertex config | 2025-03-22 11:32:01 -07:00
Krrish Dholakia | b44b3bd36b | feat(llm_passthrough_endpoints.py): base case passing for refactored vertex passthrough route | 2025-03-22 11:06:52 -07:00
Ishaan Jaff | 3dbbc89fd2 | test_openai_web_search | 2025-03-22 10:53:47 -07:00
Ishaan Jaff | 3764aa1729 | test open ai web search | 2025-03-22 10:44:04 -07:00
Krrish Dholakia | 94d3413335 | refactor(llm_passthrough_endpoints.py): refactor vertex passthrough to use common llm passthrough handler.py | 2025-03-22 10:42:46 -07:00
Krish Dholakia | 950edd76b3 | Merge pull request #9454 from BerriAI/litellm_dev_03_21_2025_p3: Fix route check for non-proxy admins on jwt auth | 2025-03-21 22:32:46 -07:00
Krrish Dholakia | 364ea3b7dc | test: fix test | 2025-03-21 22:02:39 -07:00
Ishaan Jaff | ed74b419a3 | Merge pull request #9436 from BerriAI/litellm_mcp_interface: [Feat] LiteLLM x MCP Bridge - Use MCP Tools with LiteLLM | 2025-03-21 20:42:16 -07:00
Ishaan Jaff | 8a71f129bf | ci_cd_server_path | 2025-03-21 19:06:29 -07:00
Ishaan Jaff | 7b5c0de978 | test_tools.py | 2025-03-21 18:38:24 -07:00
Ishaan Jaff | 881ac23964 | test_transform_openai_tool_call_to_mcp_tool_call_request tests | 2025-03-21 18:24:43 -07:00
Krrish Dholakia | 2b83882c07 | test: update tests | 2025-03-21 18:12:35 -07:00
Krrish Dholakia | 1ebdeb852c | test(test_internal_user_endpoints.py): add unit testing to handle user_email=None | 2025-03-21 18:06:20 -07:00
Krish Dholakia | dfb41c927e | Merge pull request #9448 from BerriAI/litellm_dev_03_21_2025_p2: Set max size limit to in-memory cache item - prevents OOM errors | 2025-03-21 17:51:46 -07:00
Ishaan Jaff | 19d6051dba | test mcp agent | 2025-03-21 17:48:16 -07:00
Ishaan Jaff | 6fb2ae8731 | docs mcp docs update | 2025-03-21 17:21:40 -07:00
Krrish Dholakia | 95ef5f1009 | refactor(user_api_key_auth.py): move is_route_allowed to inside common_checks - ensures consistent behaviour inside api key + jwt routes | 2025-03-21 17:21:07 -07:00
Krrish Dholakia | 91cf3fc40d | test: initial e2e testing to ensure non admin jwt token cannot create new teams | 2025-03-21 16:40:18 -07:00
Krrish Dholakia | 48e6a7036b | test: mock sagemaker tests | 2025-03-21 16:21:18 -07:00
Krrish Dholakia | 1f4cee6a57 | test: mock sagemaker tests | 2025-03-21 16:18:02 -07:00
Krrish Dholakia | 8265a88e0a | test: update tests | 2025-03-21 15:10:30 -07:00
Krrish Dholakia | c7b17495a1 | test: add unit testing | 2025-03-21 15:01:19 -07:00
Krrish Dholakia | dfea55a1e7 | fix(in_memory_cache.py): add max value limits to in-memory cache. Prevents OOM errors in prod | 2025-03-21 14:51:12 -07:00
Ishaan Jaff | b8b7e5e6cf | clean up | 2025-03-21 14:39:05 -07:00
Ishaan Jaff | 147787b9e0 | call_openai_tool on MCP client | 2025-03-21 14:36:32 -07:00
Krrish Dholakia | a1b716c1ef | test: fix test - handle llm api inconsistency | 2025-03-21 10:51:34 -07:00
Ishaan Jaff | bbf1962540 | fix llm responses | 2025-03-21 10:50:55 -07:00
Ishaan Jaff | 1a56bb5bdd | transform_mcp_tool_to_openai_tool | 2025-03-21 10:49:06 -07:00
Krrish Dholakia | 81a1494a51 | test: add unit testing | 2025-03-21 10:35:36 -07:00
Ishaan Jaff | d3279d114e | litellm MCP client 1 | 2025-03-21 10:32:51 -07:00
Ishaan Jaff | d61febc053 | change location of MCP client | 2025-03-21 10:30:57 -07:00
Krrish Dholakia | e7ef14398f | fix(anthropic/chat/transformation.py): correctly update response_format to tool call transformation (Fixes https://github.com/BerriAI/litellm/issues/9411) | 2025-03-21 10:20:21 -07:00
Ishaan Jaff | 177e72334c | simple MCP interface | 2025-03-21 10:11:06 -07:00
Ishaan Jaff | 5bc07b0c5d | test tool registry | 2025-03-20 22:03:56 -07:00
Ishaan Jaff | c44fe8bd90 | Merge pull request #9419 from BerriAI/litellm_streaming_o1_pro: [Feat] OpenAI o1-pro Responses API streaming support | 2025-03-20 21:54:43 -07:00
Ishaan Jaff | 7826c9bd21 | add litellm mcp endpoints | 2025-03-20 21:12:56 -07:00
Ishaan Jaff | 0e2838ab4f | remove stale file | 2025-03-20 18:00:23 -07:00