Commit graph

1276 commits

Author SHA1 Message Date
Krrish Dholakia
dacadbf624 fix(utils.py): fix anthropic streaming return usage tokens 2024-04-24 20:56:10 -07:00
Krrish Dholakia
495aebb582 fix(utils.py): fix setattr error 2024-04-24 20:19:27 -07:00
Ishaan Jaff
ca4fd85296 fix show api_base, model in timeout errors 2024-04-24 14:01:32 -07:00
Krish Dholakia
263439ee4a
Merge pull request #3098 from greenscale-ai/main
Support for Greenscale AI logging
2024-04-24 13:09:03 -07:00
Krrish Dholakia
b918f58262 fix(vertex_ai.py): raise explicit error when image url fails to download - prevents silent failure 2024-04-24 09:23:15 -07:00
Krrish Dholakia
48c2c3d78a fix(utils.py): fix streaming to not return usage dict
Fixes https://github.com/BerriAI/litellm/issues/3237
2024-04-24 08:06:07 -07:00
Krrish Dholakia
ab24f61099 fix(utils.py): fix mistral api tool calling response 2024-04-23 19:59:11 -07:00
Krish Dholakia
4acdde988f
Merge pull request #3250 from BerriAI/litellm_caching_no_cache_fix
fix(utils.py): fix 'no-cache': true when caching is turned on
2024-04-23 19:57:07 -07:00
Krrish Dholakia
d67e47d7fd fix(test_caching.py): add longer delay for async test 2024-04-23 16:13:03 -07:00
David Manouchehri
69ddd7c68f (utils.py) - Add seed for Groq 2024-04-23 20:32:21 +00:00
Krrish Dholakia
161e836427 fix(utils.py): fix 'no-cache': true when caching is turned on 2024-04-23 12:58:30 -07:00
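Honoring a per-request `'no-cache': true` while a global cache is enabled means checking the request's cache controls before any lookup. A minimal sketch of that gate (hypothetical helper name; not litellm's actual function):

```python
def should_check_cache(kwargs: dict) -> bool:
    """Return False when the caller passed {'cache': {'no-cache': True}},
    so the request bypasses the cache even with global caching on."""
    cache_controls = kwargs.get("cache") or {}
    return not cache_controls.get("no-cache", False)
```

Usage: the completion path would consult `should_check_cache(kwargs)` before reading from the cache, while still optionally writing the fresh response back.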
Simon S. Viloria
2ef4fb2efa Merge branch 'BerriAI:main' into feature/watsonx-integration 2024-04-23 12:18:34 +02:00
Simon Sanchez Viloria
74d2ba0a23 feat - watsonx refactoring, removed dependency, and added support for embedding calls 2024-04-23 12:01:13 +02:00
David Manouchehri
6d61607ee3 (utils.py) - Fix response_format typo for Groq 2024-04-23 04:26:26 +00:00
Krrish Dholakia
be4a3de27c fix(utils.py): support deepinfra response object 2024-04-22 10:51:11 -07:00
Simon S. Viloria
a77537ddd4 Merge branch 'BerriAI:main' into feature/watsonx-integration 2024-04-21 10:35:51 +02:00
Krish Dholakia
fcde3ba213
Merge pull request #3192 from BerriAI/litellm_calculate_max_parallel_requests
fix(router.py): Make TPM limits concurrency-safe
2024-04-20 13:24:29 -07:00
Krrish Dholakia
0f69f0b44e test(test_router_max_parallel_requests.py): more extensive testing for setting max parallel requests 2024-04-20 12:56:54 -07:00
Simon S. Viloria
7b2bd2e0e8 Merge branch 'BerriAI:main' into feature/watsonx-integration 2024-04-20 21:02:54 +02:00
Krrish Dholakia
33d828a0ed fix(utils.py): map vertex ai exceptions - rate limit error 2024-04-20 11:12:05 -07:00
Simon Sanchez Viloria
6edb133733 Added support for IBM watsonx.ai models 2024-04-20 20:06:46 +02:00
Krrish Dholakia
4c78f8f309 fix(router.py): calculate max_parallel_requests from given tpm limits
use the azure formula to calculate rpm -> max_parallel_requests based on a deployment's tpm limits
2024-04-20 10:43:18 -07:00
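The commit above derives a concurrency cap from a deployment's TPM limit using the Azure convention that RPM is roughly TPM divided by six. A minimal sketch of that calculation, with a hypothetical function name and the preference order (explicit cap, then rpm, then tpm-derived rpm) assumed from the commit message:

```python
import math
from typing import Optional


def calculate_max_parallel_requests(
    max_parallel_requests: Optional[int],
    rpm: Optional[int],
    tpm: Optional[int],
) -> Optional[int]:
    """Derive a concurrency cap from a deployment's limits.

    Preference order: an explicit max_parallel_requests, then rpm,
    then rpm estimated from tpm via the Azure formula (rpm = tpm / 6).
    Returns None when no limit is configured.
    """
    if max_parallel_requests is not None:
        return max_parallel_requests
    if rpm is not None:
        return rpm
    if tpm is not None:
        # Azure quota convention: ~6 TPM per 1 RPM
        return int(math.ceil(tpm / 6))
    return None
```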
Ishaan Jaff
63805873e7 fix - supports_vision should not raise Exception 2024-04-19 21:19:07 -07:00
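Per the fix above, a capability lookup like `supports_vision` should degrade gracefully for unknown models instead of raising. A hedged sketch of that behavior (the cost-map parameter and key names are assumptions for illustration):

```python
def supports_vision(model: str, model_cost_map: dict) -> bool:
    """Return False (rather than raising KeyError) when the model is
    missing from the cost map or has no vision flag."""
    info = model_cost_map.get(model)
    if info is None:
        return False
    return bool(info.get("supports_vision", False))
```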
Ishaan Jaff
01b1136631 fix - GetLLMProvider exception error raise 2024-04-18 20:10:37 -07:00
David Manouchehri
f65c02d43a (feat) - Add seed to Cohere Chat. 2024-04-18 20:57:06 +00:00
Ishaan Jaff
f610061a79
Merge pull request #3130 from BerriAI/litellm_show_vertex_project_exceptions
[FIX] - show vertex_project, vertex_location in Vertex AI exceptions
2024-04-18 13:18:20 -07:00
Ishaan Jaff
930f8712e4 fix - track vertex_location and vertex_project in vertex exceptions 2024-04-18 12:53:33 -07:00
Krrish Dholakia
28edb77350 fix(utils.py): support prometheus failed call metrics 2024-04-18 12:29:15 -07:00
Ishaan Jaff
192e0842c6 fix - show _vertex_project, _vertex_location in exceptions 2024-04-18 11:48:43 -07:00
Nandesh Guru
c8b8f93184 Merge branch 'BerriAI:main' into main 2024-04-18 09:44:31 -07:00
Krish Dholakia
91fe668411
Merge pull request #3105 from BerriAI/litellm_fix_hashing
fix(_types.py): hash api key in UserAPIKeyAuth
2024-04-18 08:16:24 -07:00
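Hashing an API key before storing it on an auth object is typically a one-way SHA-256 digest, so the raw key never sits in memory dumps or logs alongside the request. A minimal sketch of the assumed approach (not necessarily litellm's exact implementation):

```python
import hashlib


def hash_token(api_key: str) -> str:
    """Return a hex SHA-256 digest of the key; lookups compare digests,
    so the plaintext key is never persisted with the auth record."""
    return hashlib.sha256(api_key.encode("utf-8")).hexdigest()
```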
Krrish Dholakia
6eb8fe35c8 fix(utils.py): function_setup empty message fix
fixes https://github.com/BerriAI/litellm/issues/2858
2024-04-18 07:32:29 -07:00
Krrish Dholakia
b38c09c87f fix(utils.py): fix azure streaming logic 2024-04-18 07:08:36 -07:00
Krish Dholakia
bcdf24e5aa
Merge pull request #3102 from BerriAI/litellm_vertex_ai_fixes
fix(vertex_ai.py): fix faulty async call tool calling check
2024-04-17 19:16:36 -07:00
Krrish Dholakia
a862201a84 fix(utils.py): exception mapping grpc none unknown error to api error 2024-04-17 19:12:40 -07:00
Krrish Dholakia
18e3cf8bff fix(utils.py): support azure mistral function calling 2024-04-17 19:10:26 -07:00
Krrish Dholakia
15ae7a8314 fix(utils.py): fix streaming special character flushing logic 2024-04-17 18:03:40 -07:00
Krrish Dholakia
7d0086d742 fix(utils.py): ensure streaming output parsing only applied for hf / sagemaker models
selectively applies the `<s>` / `</s>` checking
2024-04-17 17:43:41 -07:00
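Gating the special-token stripping by provider, as the commit above describes, can be sketched roughly like this (function and constant names are hypothetical; the real logic lives inside litellm's streaming handler):

```python
SPECIAL_TOKENS = ["<s>", "</s>"]
PROVIDERS_WITH_SPECIAL_TOKENS = {"huggingface", "sagemaker"}


def strip_special_tokens(chunk: str, custom_llm_provider: str) -> str:
    """Only HF / SageMaker emit raw <s>/</s> markers in their output,
    so only those providers get the (potentially lossy) parsing applied;
    other providers' chunks pass through untouched."""
    if custom_llm_provider not in PROVIDERS_WITH_SPECIAL_TOKENS:
        return chunk
    for token in SPECIAL_TOKENS:
        chunk = chunk.replace(token, "")
    return chunk
```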
Krrish Dholakia
53df916f69 fix(utils.py): accept {custom_llm_provider}/{model_name} in get_model_info
fixes https://github.com/BerriAI/litellm/issues/3100
2024-04-17 16:38:53 -07:00
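Accepting `{custom_llm_provider}/{model_name}` in a model-info lookup usually means trying the fully qualified key first and then retrying with the provider prefix stripped. A hypothetical sketch (the map entries below are illustrative, not real cost-map data):

```python
# Illustrative stand-in for the model cost/info map
MODEL_INFO_MAP = {
    "llama3-8b-8192": {"max_tokens": 8192},
}


def get_model_info(model: str) -> dict:
    """Look up model info; fall back to the bare model name when the
    caller passed a 'provider/model' string."""
    if model in MODEL_INFO_MAP:
        return MODEL_INFO_MAP[model]
    if "/" in model:
        _, _, stripped = model.partition("/")
        if stripped in MODEL_INFO_MAP:
            return MODEL_INFO_MAP[stripped]
    raise ValueError(f"model info not found for {model}")
```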
Krrish Dholakia
32d94feddd refactor(utils.py): make it clearer how vertex ai params are handled
2024-04-17 16:20:56 -07:00
greenscale-nandesh
907e3973fd Merge branch 'BerriAI:main' into main 2024-04-17 12:24:29 -07:00
Krish Dholakia
8febe2f573
Merge pull request #3062 from cwang/cwang/trim-messages-fix
Use `max_input_token` for `trim_messages`
2024-04-16 22:29:45 -07:00
Ishaan Jaff
9e9d55228e fix - show model, deployment, model group in vertex error 2024-04-16 19:59:34 -07:00
Krrish Dholakia
4d0d6127d8 fix(utils.py): fix get_api_base 2024-04-16 18:50:27 -07:00
greenscale-nandesh
3feb0ef897 Merge branch 'BerriAI:main' into main 2024-04-16 11:49:26 -07:00
Chen Wang
38c61a23b4 Fall back to max_tokens 2024-04-16 19:00:09 +01:00
Chen Wang
ebc889d77a Use max_input_token for trim_messages 2024-04-16 13:36:25 +01:00
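The two commits above suggest a limit lookup that prefers the input-side context window and falls back to the combined limit when a model entry only defines `max_tokens`. A hedged sketch of that preference (helper name hypothetical):

```python
from typing import Optional


def get_trim_limit(model_info: dict) -> Optional[int]:
    """Prefer max_input_tokens for message trimming; older model-info
    entries only define max_tokens, so fall back to that."""
    return model_info.get("max_input_tokens") or model_info.get("max_tokens")
```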
Ishaan Jaff
fb8e256aba feat - new util supports_vision 2024-04-15 18:10:12 -07:00
Krrish Dholakia
0683589029 fix(utils.py): fix timeout error - don't pass in httpx.request 2024-04-15 10:50:23 -07:00
Krish Dholakia
72b54eaad7
Merge pull request #3028 from BerriAI/litellm_anthropic_text_completion_fix
fix(anthropic_text.py): add support for async text completion calls
2024-04-15 09:26:28 -07:00