Commit graph

1208 commits

Author SHA1 Message Date
Krrish Dholakia
fea0e6bb19 fix(test_caching.py): add longer delay for async test 2024-04-23 16:13:03 -07:00
Krrish Dholakia
04014c752b fix(utils.py): fix handling of 'no-cache': true when caching is turned on 2024-04-23 12:58:30 -07:00
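The 'no-cache' fix above is about honoring a per-request cache-control flag even when global caching is enabled. A minimal sketch of that kind of check; the function name and the cache_controls dict are illustrative assumptions, not litellm's actual API:

```python
# Illustrative only: decide whether to read from cache for a single request.
# Parameter names are assumptions, not litellm's real signature.
def should_check_cache(cache_enabled: bool, cache_controls: dict | None) -> bool:
    if not cache_enabled:
        return False  # no global cache configured
    # A per-request {"no-cache": True} must win over the global setting.
    return not (cache_controls or {}).get("no-cache", False)
```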
David Manouchehri
68bf14b2a5 (utils.py) - Fix response_format typo for Groq 2024-04-23 04:26:26 +00:00
Krrish Dholakia
011beb1918 fix(utils.py): support deepinfra response object 2024-04-22 10:51:11 -07:00
Krish Dholakia
70d59b1806 Merge pull request #3192 from BerriAI/litellm_calculate_max_parallel_requests
fix(router.py): Make TPM limits concurrency-safe
2024-04-20 13:24:29 -07:00
Krrish Dholakia
9f6e90e17d test(test_router_max_parallel_requests.py): more extensive testing for setting max parallel requests 2024-04-20 12:56:54 -07:00
Krrish Dholakia
b9042ba8ae fix(utils.py): map vertex ai exceptions - rate limit error 2024-04-20 11:12:05 -07:00
Krrish Dholakia
22d3121f48 fix(router.py): calculate max_parallel_requests from given tpm limits
use the Azure formula to calculate rpm -> max_parallel_requests based on a deployment's tpm limits
2024-04-20 10:43:18 -07:00
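The extended message above refers to Azure's published ratio between token and request quotas (roughly 6 RPM per 1000 TPM). A hedged sketch of deriving a concurrency cap that way; the function name, the ratio, and the flooring are assumptions for illustration, not necessarily what router.py does:

```python
# Sketch: turn a deployment's tokens-per-minute quota into a parallel-request cap.
# Assumes Azure's ~6 RPM per 1000 TPM guidance; constants are illustrative.
def calculate_max_parallel_requests(tpm: int | None) -> int | None:
    if tpm is None:
        return None            # no quota known, leave the cap unset
    rpm = (tpm * 6) // 1000    # requests/minute implied by the token quota
    return max(rpm, 1)         # always allow at least one concurrent request
```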
Ishaan Jaff
da23efe8ed fix - supports_vision should not raise Exception 2024-04-19 21:19:07 -07:00
Ishaan Jaff
fa887dbff2 fix - GetLLMProvider exception raising 2024-04-18 20:10:37 -07:00
David Manouchehri
e22f22e0a9 (feat) - Add seed to Cohere Chat. 2024-04-18 20:57:06 +00:00
Ishaan Jaff
0f941678b4 Merge pull request #3130 from BerriAI/litellm_show_vertex_project_exceptions
[FIX] - show vertex_project, vertex_location in Vertex AI exceptions
2024-04-18 13:18:20 -07:00
Ishaan Jaff
177bc683b3 fix - track vertex_location and vertex_project in vertex exceptions 2024-04-18 12:53:33 -07:00
Krrish Dholakia
deccde6be1 fix(utils.py): support prometheus failed call metrics 2024-04-18 12:29:15 -07:00
Ishaan Jaff
2a18f5b8a9 fix - show _vertex_project, _vertex_location in exceptions 2024-04-18 11:48:43 -07:00
Krish Dholakia
fe5c63e80b Merge pull request #3105 from BerriAI/litellm_fix_hashing
fix(_types.py): hash api key in UserAPIKeyAuth
2024-04-18 08:16:24 -07:00
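The merged PR hashes API keys before they are stored on UserAPIKeyAuth, so the raw key never persists. A minimal sketch of the general idea, assuming a plain SHA-256 hex digest; the helper name and exact scheme are assumptions here:

```python
import hashlib

# Illustrative helper: keep only a one-way hash of the raw key.
def hash_api_key(api_key: str) -> str:
    return hashlib.sha256(api_key.encode("utf-8")).hexdigest()
```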
Krrish Dholakia
280d9b4405 fix(utils.py): function_setup empty message fix
fixes https://github.com/BerriAI/litellm/issues/2858
2024-04-18 07:32:29 -07:00
Krrish Dholakia
64fe5b146c fix(utils.py): fix azure streaming logic 2024-04-18 07:08:36 -07:00
Krish Dholakia
49161e3ba4 Merge pull request #3102 from BerriAI/litellm_vertex_ai_fixes
fix(vertex_ai.py): fix faulty async call tool calling check
2024-04-17 19:16:36 -07:00
Krrish Dholakia
3e49a87f8b fix(utils.py): exception mapping grpc none unknown error to api error 2024-04-17 19:12:40 -07:00
Krrish Dholakia
fdd73a4e26 fix(utils.py): support azure mistral function calling 2024-04-17 19:10:26 -07:00
Krrish Dholakia
caa46ca905 fix(utils.py): fix streaming special character flushing logic 2024-04-17 18:03:40 -07:00
Krrish Dholakia
1b4462ee70 fix(utils.py): ensure streaming output parsing is only applied for hf / sagemaker models
selectively applies the <s> / </s> token checking
2024-04-17 17:43:41 -07:00
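For the streaming fix above, only HuggingFace / SageMaker-style models emit sentencepiece markers such as <s> and </s>, so the stripping has to be gated on the provider. A rough sketch, with names chosen for illustration:

```python
# Strip special tokens from a streamed chunk, but only for providers whose
# models actually emit them. The provider strings here are assumptions.
SPECIAL_TOKENS = ("<s>", "</s>")

def clean_chunk(chunk: str, custom_llm_provider: str) -> str:
    if custom_llm_provider not in ("huggingface", "sagemaker"):
        return chunk  # other providers' output is left untouched
    for token in SPECIAL_TOKENS:
        chunk = chunk.replace(token, "")
    return chunk
```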
Krrish Dholakia
2a2b97f093 fix(utils.py): accept {custom_llm_provider}/{model_name} in get_model_info
fixes https://github.com/BerriAI/litellm/issues/3100
2024-04-17 16:38:53 -07:00
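The get_model_info change accepts identifiers of the form {custom_llm_provider}/{model_name}. A small sketch of the splitting step such a lookup needs; the helper below is hypothetical:

```python
# Split an optional "provider/model" prefix before the model-info lookup.
# e.g. split_model_identifier("groq/llama3-8b-8192") -> ("groq", "llama3-8b-8192")
def split_model_identifier(model: str) -> tuple[str | None, str]:
    if "/" in model:
        provider, _, model_name = model.partition("/")
        return provider, model_name
    return None, model
```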
Krrish Dholakia
72d7c36c76 refactor(utils.py): make it clearer how vertex ai params are handled 2024-04-17 16:20:56 -07:00
Krish Dholakia
d55aada92a Merge pull request #3062 from cwang/cwang/trim-messages-fix
Use `max_input_token` for `trim_messages`
2024-04-16 22:29:45 -07:00
Ishaan Jaff
7bb86d7a4b fix - show model, deployment, model group in vertex error 2024-04-16 19:59:34 -07:00
Krrish Dholakia
12b6aaeb2b fix(utils.py): fix get_api_base 2024-04-16 18:50:27 -07:00
Chen Wang
4f4625c7a0 Fall back to max_tokens 2024-04-16 19:00:09 +01:00
Chen Wang
2567f9a3a6 Use max_input_token for trim_messages 2024-04-16 13:36:25 +01:00
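These two commits make trim_messages prefer a model's input-token limit and fall back to max_tokens when it is missing. A minimal sketch, assuming the limits live in a model-info dict keyed as below:

```python
# Hedged sketch: pick the token budget used when trimming messages.
# The dict keys are assumptions about how the limits are stored.
def resolve_trim_limit(model_info: dict) -> int | None:
    return model_info.get("max_input_tokens") or model_info.get("max_tokens")
```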
Ishaan Jaff
511546d2fe feat - new util supports_vision 2024-04-15 18:10:12 -07:00
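supports_vision (introduced here, and later adjusted above to return False instead of raising) is essentially a capability lookup. An illustrative version against a hypothetical model-info map:

```python
# Illustrative capability lookup; the table below is sample data, not litellm's.
MODEL_INFO = {
    "gpt-4-vision-preview": {"supports_vision": True},
    "gpt-3.5-turbo": {"supports_vision": False},
}

def supports_vision(model: str) -> bool:
    # Returning False for unknown models avoids raising, matching the later fix.
    return MODEL_INFO.get(model, {}).get("supports_vision", False)
```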
Krrish Dholakia
63b6165ea5 fix(utils.py): fix timeout error - don't pass in httpx.request 2024-04-15 10:50:23 -07:00
Krish Dholakia
cfd2bc030f Merge pull request #3028 from BerriAI/litellm_anthropic_text_completion_fix
fix(anthropic_text.py): add support for async text completion calls
2024-04-15 09:26:28 -07:00
Krrish Dholakia
1cd0551a1e fix(anthropic_text.py): add support for async text completion calls 2024-04-15 08:15:00 -07:00
Ishaan Jaff
3c8150914f groq - add tool calling support 2024-04-15 08:09:27 -07:00
Krrish Dholakia
866259f95f feat(prometheus_services.py): monitor health of proxy adjacent services (redis / postgres / etc.) 2024-04-13 18:15:02 -07:00
Ishaan Jaff
7d2215a809 Merge pull request #2991 from BerriAI/litellm_fix_text_completion_caching
[Feat] Support + Test caching for TextCompletion
2024-04-12 20:08:01 -07:00
Ishaan Jaff
41ec025b5c fix - support text completion caching 2024-04-12 12:34:28 -07:00
Krish Dholakia
6dbe2bef9a Merge pull request #2984 from Dev-Khant/slack-msg-truncation
truncate long slack msg
2024-04-12 08:30:08 -07:00
Dev Khant
18eae1facf truncate long slack msg 2024-04-12 17:22:14 +05:30
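Truncating a long Slack message is a simple length guard. A sketch; the cutoff value and function name are assumptions, not a documented Slack constant:

```python
# Illustrative guard: keep alert text under a fixed length before posting.
def truncate_message(text: str, limit: int = 4000) -> str:
    if len(text) <= limit:
        return text
    return text[: limit - 3] + "..."
```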
Krrish Dholakia
ec72202d56 fix(gemini.py): log system prompt in verbose output 2024-04-11 23:15:58 -07:00
Krrish Dholakia
4c0ba026a7 fix(utils.py): vertex ai exception mapping
fixes the check that caused all vertex errors to be mapped as rate-limit errors
2024-04-11 23:04:21 -07:00
David Manouchehri
cc71ca3166 (feat) - Add support for JSON mode in Vertex AI 2024-04-12 00:03:29 +00:00
Krish Dholakia
e48cc9f1e4 Merge pull request #2942 from BerriAI/litellm_fix_router_loading
Router Async Improvements
2024-04-10 20:16:53 -07:00
Krrish Dholakia
8f06c2d8c4 fix(router.py): fix datetime object 2024-04-10 17:55:24 -07:00
Ishaan Jaff
686810ec00 fix - allow base64 cache hits for embedding responses 2024-04-10 16:44:40 -07:00
Krrish Dholakia
06a0ca1e80 fix(proxy_cli.py): don't double load the router config
was causing callbacks to be instantiated twice, double counting usage in the cache
2024-04-10 13:23:56 -07:00
Ishaan Jaff
3083326c33 Merge pull request #2893 from unclecode/main
Fix issue #2832: Add protected_namespaces to Config class within utils.py, router.py and completion.py to avoid the warning message.
2024-04-09 08:51:41 -07:00
Krrish Dholakia
075c96a408 fix(utils.py): fix reordering of items for cached embeddings
ensures cached embedding items are returned in the correct order
2024-04-08 12:18:24 -07:00
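The embedding-cache fix is about ordering: when some inputs hit the cache and others are freshly embedded, the combined response must follow the original input order. A hedged sketch, assuming both sides are keyed by the original input index:

```python
# Merge cached and freshly computed embeddings back into input order.
# Both dicts map original input index -> embedding vector (an assumption).
def merge_embeddings(num_inputs: int, cached: dict, fresh: dict) -> list:
    return [cached[i] if i in cached else fresh[i] for i in range(num_inputs)]
```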
unclecode
311e801ab4 Fix issue #2832: Add protected_namespaces to Config class within utils.py, router.py and completion.py to avoid the warning message. 2024-04-08 12:43:17 +08:00
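The protected_namespaces change addresses Pydantic v2's warning about field names starting with "model_". A minimal illustration of the setting; the class and field here are made up, and only the ConfigDict line reflects the actual mechanism:

```python
from pydantic import BaseModel, ConfigDict

class Deployment(BaseModel):  # hypothetical example class
    # Clearing protected_namespaces silences the "model_" prefix warning.
    model_config = ConfigDict(protected_namespaces=())
    model_name: str
```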