dfb41c927e | Krish Dholakia | 2025-03-21 17:51:46 -07:00
    Merge pull request #9448 from BerriAI/litellm_dev_03_21_2025_p2
    Set max size limit to in-memory cache item - prevents OOM errors
    CI: Read Version from pyproject.toml / read-version (push) successful in 15s; Helm unit test / unit-test (push) successful in 19s
19d6051dba | Ishaan Jaff | 2025-03-21 17:48:16 -07:00
    test mcp agent

6fb2ae8731 | Ishaan Jaff | 2025-03-21 17:21:40 -07:00
    docs mcp docs update

95ef5f1009 | Krrish Dholakia | 2025-03-21 17:21:07 -07:00
    refactor(user_api_key_auth.py): move is_route_allowed to inside common_checks
    ensures consistent behaviour inside api key + jwt routes

91cf3fc40d | Krrish Dholakia | 2025-03-21 16:40:18 -07:00
    test: initial e2e testing to ensure non admin jwt token cannot create new teams
48e6a7036b | Krrish Dholakia | 2025-03-21 16:21:18 -07:00
    test: mock sagemaker tests

1f4cee6a57 | Krrish Dholakia | 2025-03-21 16:18:02 -07:00
    test: mock sagemaker tests

8265a88e0a | Krrish Dholakia | 2025-03-21 15:10:30 -07:00
    test: update tests

c7b17495a1 | Krrish Dholakia | 2025-03-21 15:01:19 -07:00
    test: add unit testing
dfea55a1e7 | Krrish Dholakia | 2025-03-21 14:51:12 -07:00
    fix(in_memory_cache.py): add max value limits to in-memory cache. Prevents OOM errors in prod
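The cache fix above caps the size of individual in-memory cache items so a few oversized values cannot exhaust process memory. A minimal sketch of the idea, assuming a hypothetical `BoundedInMemoryCache` with a 1 MB per-item ceiling (names and limit are illustrative, not LiteLLM's actual implementation):

```python
import sys

# Assumed ceiling for this sketch; the real limit is a tuning choice.
MAX_SIZE_PER_ITEM_BYTES = 1024 * 1024  # 1 MB

class BoundedInMemoryCache:
    """In-memory cache that refuses to store items above a size ceiling."""

    def __init__(self, max_item_bytes: int = MAX_SIZE_PER_ITEM_BYTES):
        self.max_item_bytes = max_item_bytes
        self._store: dict = {}

    def set(self, key: str, value) -> bool:
        # sys.getsizeof is a shallow measure; real code may need a deep
        # (recursive) size estimate for nested containers.
        if sys.getsizeof(value) > self.max_item_bytes:
            return False  # skip caching oversized items instead of OOM-ing
        self._store[key] = value
        return True

    def get(self, key: str):
        return self._store.get(key)
```

Declining to cache an oversized value is a graceful degradation: the caller recomputes on a miss, which is slower but bounded in memory.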
b8b7e5e6cf | Ishaan Jaff | 2025-03-21 14:39:05 -07:00
    clean up

147787b9e0 | Ishaan Jaff | 2025-03-21 14:36:32 -07:00
    call_openai_tool on MCP client

a1b716c1ef | Krrish Dholakia | 2025-03-21 10:51:34 -07:00
    test: fix test - handle llm api inconsistency

bbf1962540 | Ishaan Jaff | 2025-03-21 10:50:55 -07:00
    fix llm responses

1a56bb5bdd | Ishaan Jaff | 2025-03-21 10:49:06 -07:00
    transform_mcp_tool_to_openai_tool
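The `transform_mcp_tool_to_openai_tool` commit points at a mapping from MCP tool descriptors to OpenAI function-tool dicts. A hedged sketch of what such a transform could look like, using the MCP spec's `name`/`description`/`inputSchema` fields (this is not the project's actual code):

```python
def transform_mcp_tool_to_openai_tool(mcp_tool: dict) -> dict:
    # MCP tools carry a JSON Schema under "inputSchema"; the OpenAI tools
    # API expects the same schema under function.parameters.
    return {
        "type": "function",
        "function": {
            "name": mcp_tool["name"],
            "description": mcp_tool.get("description", ""),
            "parameters": mcp_tool.get(
                "inputSchema", {"type": "object", "properties": {}}
            ),
        },
    }
```

Because both sides speak JSON Schema, the translation is mostly a re-nesting of fields rather than a semantic conversion.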
81a1494a51 | Krrish Dholakia | 2025-03-21 10:35:36 -07:00
    test: add unit testing

d3279d114e | Ishaan Jaff | 2025-03-21 10:32:51 -07:00
    litellm MCP client 1

d61febc053 | Ishaan Jaff | 2025-03-21 10:30:57 -07:00
    change location of MCP client
e7ef14398f | Krrish Dholakia | 2025-03-21 10:20:21 -07:00
    fix(anthropic/chat/transformation.py): correctly update response_format to tool call transformation
    Fixes https://github.com/BerriAI/litellm/issues/9411
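A common way to emulate a structured `response_format` on providers without a native JSON mode is to register a single forced tool whose input schema is the requested schema, so the model's "tool arguments" become the structured output. A sketch under that assumption (the helper and the `json_tool_call` name are illustrative, not necessarily what the fix above does):

```python
def response_format_to_anthropic_tool(response_format: dict) -> dict:
    # Pull the JSON schema out of an OpenAI-style response_format and
    # repackage it as an Anthropic-style tool definition.
    schema = (
        response_format.get("json_schema", {}).get("schema")
        or {"type": "object", "properties": {}}
    )
    return {
        "name": "json_tool_call",
        "description": "Return the answer matching the requested JSON schema.",
        "input_schema": schema,
    }
```

The request would then force this tool via `tool_choice`, and the response transformation would lift the tool-call arguments back into the message content.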
177e72334c | Ishaan Jaff | 2025-03-21 10:11:06 -07:00
    simple MCP interface

5bc07b0c5d | Ishaan Jaff | 2025-03-20 22:03:56 -07:00
    test tool registry

c44fe8bd90 | Ishaan Jaff | 2025-03-20 21:54:43 -07:00
    Merge pull request #9419 from BerriAI/litellm_streaming_o1_pro
    [Feat] OpenAI o1-pro Responses API streaming support
    CI: Read Version from pyproject.toml / read-version (push) successful in 19s; Helm unit test / unit-test (push) successful in 21s
7826c9bd21 | Ishaan Jaff | 2025-03-20 21:12:56 -07:00
    add litellm mcp endpoints

0e2838ab4f | Ishaan Jaff | 2025-03-20 18:00:23 -07:00
    remove stale file

3fccf5fbef | Ishaan Jaff | 2025-03-20 17:18:28 -07:00
    mock langchain MCP interface

93836fa84a | Ishaan Jaff | 2025-03-20 15:37:24 -07:00
    sample mcp server

15048de5e2 | Ishaan Jaff | 2025-03-20 14:50:00 -07:00
    test_prepare_fake_stream_request

46d68a61c8 | Krrish Dholakia | 2025-03-20 14:37:58 -07:00
    fix: fix testing
ab385848c1 | Krish Dholakia | 2025-03-20 14:00:51 -07:00
    Merge pull request #9260 from Grizzly-jobs/fix/voyage-ai-token-usage-tracking
    fix: VoyageAI `prompt_token` always empty

706bcf4432 | Krish Dholakia | 2025-03-20 13:55:33 -07:00
    Merge pull request #9366 from JamesGuthrie/jg/vertex-output-dimensionality
    fix: VertexAI outputDimensionality configuration

b04cf226aa | Ishaan Jaff | 2025-03-20 13:04:49 -07:00
    test_openai_o1_pro_response_api_streaming

d915ab3f07 | Ishaan Jaff | 2025-03-20 09:18:38 -07:00
    test_openai_o1_pro_response_api

7fee847ffc | Ishaan Jaff | 2025-03-20 09:14:59 -07:00
    test_openai_o1_pro_incomplete_response
8ef9129556 | Krrish Dholakia | 2025-03-19 23:13:51 -07:00
    fix(types/utils.py): support openai 'file' message type
    Closes https://github.com/BerriAI/litellm/issues/9365

fe24b9d90b | Krrish Dholakia | 2025-03-19 22:57:49 -07:00
    feat(azure/gpt_transformation.py): add azure audio model support
    Closes https://github.com/BerriAI/litellm/issues/6305

1bd7443c25 | Ishaan Jaff | 2025-03-19 21:06:41 -07:00
    Merge pull request #9384 from BerriAI/litellm_prompt_management_custom
    [Feat] - Allow building custom prompt management integration

247e4d09ee | Ishaan Jaff | 2025-03-19 21:03:06 -07:00
    Merge branch 'main' into litellm_fix_ssl_verify

30fdd934a4 | Ishaan Jaff | 2025-03-19 17:40:15 -07:00
    TestCustomPromptManagement
9432d1a865 | Krish Dholakia | 2025-03-19 15:45:10 -07:00
    Merge pull request #9357 from BerriAI/litellm_dev_03_18_2025_p2
    fix(lowest_tpm_rpm_v2.py): support batch writing increments to redis
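Batch-writing increments means buffering counter deltas locally and flushing them in one round trip instead of issuing one network call per request. An illustrative sketch, with a plain dict standing in for Redis (real code would flush via a Redis pipeline of `INCRBY` commands; class and key names are made up):

```python
from collections import defaultdict

class BatchedIncrementWriter:
    """Accumulate counter increments locally, then flush them in one batch."""

    def __init__(self, backend: dict):
        self.backend = backend           # stand-in for a Redis connection
        self.pending = defaultdict(int)  # key -> accumulated delta

    def increment(self, key: str, amount: int = 1) -> None:
        # No network I/O here: just buffer the delta.
        self.pending[key] += amount

    def flush(self) -> None:
        # One "round trip": apply every buffered delta, then reset.
        for key, amount in self.pending.items():
            self.backend[key] = self.backend.get(key, 0) + amount
        self.pending.clear()
```

The trade-off is staleness between flushes: remote counters lag by at most one flush interval, which is usually acceptable for TPM/RPM rate tracking.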
041d5391eb | Krrish Dholakia | 2025-03-19 12:01:37 -07:00
    test(test_proxy_server.py): make test work on ci/cd

858da57b3c | Krrish Dholakia | 2025-03-19 11:44:00 -07:00
    test(test_proxy_server.py): add unit test to ensure get credentials only called behind feature flag
437dbe7246 | James Guthrie | 2025-03-19 11:07:36 +01:00
    fix: VertexAI outputDimensionality configuration
    VertexAI's API documentation [1] is an absolute mess. In it, they
    describe the parameter to configure output dimensionality as
    `output_dimensionality`. In the API example, they switch to camel
    case `outputDimensionality`, which is the correct variant.
    [1]: https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api#generative-ai-get-text-embedding-drest
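To make the naming point above concrete, here is a minimal sketch of a Vertex AI text-embeddings request body using the camelCase key (the helper function and placeholder values are illustrative; only the `outputDimensionality` field name comes from the API):

```python
from typing import Optional

def build_embedding_request(text: str, dimensions: Optional[int] = None) -> dict:
    # The dimensionality knob on the REST embeddings endpoint is the
    # camelCase "outputDimensionality", not "output_dimensionality".
    body = {"instances": [{"content": text}]}
    if dimensions is not None:
        body["parameters"] = {"outputDimensionality": dimensions}
    return body
```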
01c6cbd270 | Krish Dholakia | 2025-03-18 23:36:12 -07:00
    Merge pull request #9363 from BerriAI/litellm_dev_03_18_2025_p3
    fix(common_utils.py): handle cris only model
    CI: Read Version from pyproject.toml / read-version (push) successful in 18s; Helm unit test / unit-test (push) successful in 21s

9adad381b4 | Krrish Dholakia | 2025-03-18 23:35:43 -07:00
    fix(common_utils.py): handle cris only model
    Fixes https://github.com/BerriAI/litellm/issues/9161#issuecomment-2734905153

e32aee9124 | Ishaan Jaff | 2025-03-18 23:35:28 -07:00
    Merge pull request #9353 from BerriAI/litellm_arize_dynamic_logging
    [Feat] - API - Allow using dynamic Arize AI Spaces on LiteLLM

6347b694ee | Krish Dholakia | 2025-03-18 23:24:07 -07:00
    Merge pull request #9335 from BerriAI/litellm_dev_03_17_2025_p3
    Contributor PR: Fix sagemaker too little data for content error

8690873488 | Ishaan Jaff | 2025-03-18 23:22:55 -07:00
    test_arize_dynamic_params

8568caf532 | Ishaan Jaff | 2025-03-18 23:18:07 -07:00
    test_arize_dynamic_params
084e8c425c | Krrish Dholakia | 2025-03-18 22:41:02 -07:00
    refactor(base_routing_strategy.py): fix function names

3033c40739 | Krrish Dholakia | 2025-03-18 22:20:39 -07:00
    fix(base_routing_strategy.py): fix base to handle no running event loop
    run in a separate thread
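Calling `asyncio.run()` from a thread whose event loop is already running raises `RuntimeError`. The "run in a separate thread" fix above matches a common pattern: detect whether a loop is running and, if so, hand the coroutine to a fresh loop in a new thread. A sketch of that general pattern (the helper name is made up; this is not necessarily `base_routing_strategy.py`'s exact code):

```python
import asyncio
import threading

def run_coro_safely(coro):
    """Run a coroutine whether or not an event loop is already running here."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop in this thread: run the coroutine directly.
        return asyncio.run(coro)
    # A loop is already running: asyncio.run() would raise RuntimeError,
    # so spin up a fresh loop in a separate thread and wait for its result.
    result: dict = {}

    def _worker():
        result["value"] = asyncio.run(coro)

    t = threading.Thread(target=_worker)
    t.start()
    t.join()
    return result["value"]
```

Note that `t.join()` blocks the calling thread, so in a real async code path one would usually schedule the work as a background task instead of waiting synchronously.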