Ishaan Jaff
|
2a18f5b8a9
|
fix - show _vertex_project, _vertex_location in exceptions
|
2024-04-18 11:48:43 -07:00 |
|
Nandesh Guru
|
9e46d3c0ac
|
Merge branch 'BerriAI:main' into main
|
2024-04-18 09:44:31 -07:00 |
|
Krish Dholakia
|
fe5c63e80b
|
Merge pull request #3105 from BerriAI/litellm_fix_hashing
fix(_types.py): hash api key in UserAPIKeyAuth
|
2024-04-18 08:16:24 -07:00 |
|
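The key-hashing change merged above can be sketched as follows. This is a minimal illustration of hashing an API key before it is stored or logged; `hash_token` is a hypothetical helper name, not litellm's actual implementation:

```python
import hashlib

def hash_token(api_key: str) -> str:
    # Store only the SHA-256 digest so the raw key never sits in the
    # database or in logs; lookups compare digests instead of raw keys.
    return hashlib.sha256(api_key.encode("utf-8")).hexdigest()

hashed = hash_token("sk-test-123")
```

The digest is deterministic, so an incoming key can be re-hashed and compared against the stored value without ever persisting the plaintext.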
Krrish Dholakia
|
280d9b4405
|
fix(utils.py): function_setup empty message fix
fixes https://github.com/BerriAI/litellm/issues/2858
|
2024-04-18 07:32:29 -07:00 |
|
Krrish Dholakia
|
64fe5b146c
|
fix(utils.py): fix azure streaming logic
|
2024-04-18 07:08:36 -07:00 |
|
Krish Dholakia
|
49161e3ba4
|
Merge pull request #3102 from BerriAI/litellm_vertex_ai_fixes
fix(vertex_ai.py): fix faulty async call tool calling check
|
2024-04-17 19:16:36 -07:00 |
|
Krrish Dholakia
|
3e49a87f8b
|
fix(utils.py): exception mapping grpc none unknown error to api error
|
2024-04-17 19:12:40 -07:00 |
|
Krrish Dholakia
|
fdd73a4e26
|
fix(utils.py): support azure mistral function calling
|
2024-04-17 19:10:26 -07:00 |
|
Krrish Dholakia
|
caa46ca905
|
fix(utils.py): fix streaming special character flushing logic
|
2024-04-17 18:03:40 -07:00 |
|
Krrish Dholakia
|
1b4462ee70
|
fix(utils.py): ensure streaming output parsing only applied for hf / sagemaker models
selectively applies the <s> / </s> checking
|
2024-04-17 17:43:41 -07:00 |
|
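The commit above scopes sentinel-token stripping to hf / sagemaker streams. A hedged sketch of that idea (provider names and function are illustrative, not litellm's real code path):

```python
# Assumption: only these providers emit raw <s> / </s> markers in chunks.
HF_PROVIDERS = {"huggingface", "sagemaker"}

def strip_special_tokens(chunk: str, provider: str) -> str:
    # Chunks from other providers pass through untouched, so legitimate
    # "<s>" text in, say, an OpenAI response is never mangled.
    if provider not in HF_PROVIDERS:
        return chunk
    return chunk.replace("<s>", "").replace("</s>", "")
```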
Krrish Dholakia
|
2a2b97f093
|
fix(utils.py): accept {custom_llm_provider}/{model_name} in get_model_info
fixes https://github.com/BerriAI/litellm/issues/3100
|
2024-04-17 16:38:53 -07:00 |
|
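The fix above lets `get_model_info` accept a `{custom_llm_provider}/{model_name}` string as well as a bare model name. A minimal sketch, using a hypothetical lookup table (real model info comes from litellm's model cost map):

```python
# Illustrative data only; not litellm's actual schema.
MODEL_INFO = {"gemini-pro": {"max_tokens": 32760}}

def get_model_info(model: str) -> dict:
    # Accept both "model" and "provider/model" forms by stripping the
    # provider prefix before the lookup.
    if "/" in model:
        _, model = model.split("/", 1)
    return MODEL_INFO[model]
```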
Krrish Dholakia
|
72d7c36c76
|
refactor(utils.py): make it clearer how vertex ai params are handled
|
2024-04-17 16:20:56 -07:00 |
|
greenscale-nandesh
|
86ac589bdd
|
Merge branch 'BerriAI:main' into main
|
2024-04-17 12:24:29 -07:00 |
|
Krish Dholakia
|
d55aada92a
|
Merge pull request #3062 from cwang/cwang/trim-messages-fix
Use `max_input_token` for `trim_messages`
|
2024-04-16 22:29:45 -07:00 |
|
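The PR merged above makes message trimming budget against the model's input limit rather than its overall `max_tokens`. A hedged sketch of that selection logic; the key names are illustrative, not litellm's exact schema:

```python
def pick_trim_budget(model_info: dict) -> int:
    # Prefer the model's input-token limit when it is defined; otherwise
    # fall back to max_tokens (the fallback the follow-up commit adds).
    return model_info.get("max_input_tokens") or model_info["max_tokens"]

budget = pick_trim_budget({"max_input_tokens": 8192, "max_tokens": 4096})
```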
Ishaan Jaff
|
7bb86d7a4b
|
fix - show model, deployment, model group in vertex error
|
2024-04-16 19:59:34 -07:00 |
|
Krrish Dholakia
|
12b6aaeb2b
|
fix(utils.py): fix get_api_base
|
2024-04-16 18:50:27 -07:00 |
|
greenscale-nandesh
|
334772b922
|
Merge branch 'BerriAI:main' into main
|
2024-04-16 11:49:26 -07:00 |
|
Chen Wang
|
4f4625c7a0
|
Fall back to max_tokens
|
2024-04-16 19:00:09 +01:00 |
|
Chen Wang
|
2567f9a3a6
|
Use max_input_token for trim_messages
|
2024-04-16 13:36:25 +01:00 |
|
Ishaan Jaff
|
511546d2fe
|
feat - new util supports_vision
|
2024-04-15 18:10:12 -07:00 |
|
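The `supports_vision` util introduced above can be sketched as a capability lookup. The table below is hypothetical; litellm's real data lives in its model cost map:

```python
# Illustrative capability table, not litellm's actual data.
MODEL_CAPS = {
    "gpt-4-vision-preview": {"supports_vision": True},
    "gpt-3.5-turbo": {"supports_vision": False},
}

def supports_vision(model: str) -> bool:
    # Unknown models default to False rather than raising.
    return MODEL_CAPS.get(model, {}).get("supports_vision", False)
```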
Krrish Dholakia
|
63b6165ea5
|
fix(utils.py): fix timeout error - don't pass in httpx.request
|
2024-04-15 10:50:23 -07:00 |
|
Krish Dholakia
|
cfd2bc030f
|
Merge pull request #3028 from BerriAI/litellm_anthropic_text_completion_fix
fix(anthropic_text.py): add support for async text completion calls
|
2024-04-15 09:26:28 -07:00 |
|
Krrish Dholakia
|
1cd0551a1e
|
fix(anthropic_text.py): add support for async text completion calls
|
2024-04-15 08:15:00 -07:00 |
|
Ishaan Jaff
|
3c8150914f
|
groq - add tool calling support
|
2024-04-15 08:09:27 -07:00 |
|
Krrish Dholakia
|
866259f95f
|
feat(prometheus_services.py): monitor health of proxy adjacent services (redis / postgres / etc.)
|
2024-04-13 18:15:02 -07:00 |
|
Ishaan Jaff
|
7d2215a809
|
Merge pull request #2991 from BerriAI/litellm_fix_text_completion_caching
[Feat] Support + Test caching for TextCompletion
|
2024-04-12 20:08:01 -07:00 |
|
Ishaan Jaff
|
41ec025b5c
|
fix - support text completion caching
|
2024-04-12 12:34:28 -07:00 |
|
Krish Dholakia
|
6dbe2bef9a
|
Merge pull request #2984 from Dev-Khant/slack-msg-truncation
truncate long slack msg
|
2024-04-12 08:30:08 -07:00 |
|
Dev Khant
|
18eae1facf
|
truncate long slack msg
|
2024-04-12 17:22:14 +05:30 |
|
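The slack-truncation fix above can be sketched as a simple length cap. The limit below is an assumption for illustration, not an exact Slack API constant:

```python
MAX_SLACK_MSG_LEN = 4000  # assumption: Slack-ish text limit

def truncate_slack_msg(msg: str, limit: int = MAX_SLACK_MSG_LEN) -> str:
    # Keep the head of the message and mark the cut so readers
    # know the alert was shortened.
    if len(msg) <= limit:
        return msg
    return msg[: limit - 3] + "..."
```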
Mikkel Gravgaard
|
1d18bf2888
|
Use DEBUG level for curl command logging
Currently, the INFO level is used, which can cause excessive logging in production.
|
2024-04-12 11:27:53 +02:00 |
|
Krrish Dholakia
|
ec72202d56
|
fix(gemini.py): log system prompt in verbose output
|
2024-04-11 23:15:58 -07:00 |
|
Krrish Dholakia
|
4c0ba026a7
|
fix(utils.py): vertex ai exception mapping
fixes the check which caused all vertex errors to be mapped as rate-limit errors
|
2024-04-11 23:04:21 -07:00 |
|
David Manouchehri
|
cc71ca3166
|
(feat) - Add support for JSON mode in Vertex AI
|
2024-04-12 00:03:29 +00:00 |
|
Krish Dholakia
|
e48cc9f1e4
|
Merge pull request #2942 from BerriAI/litellm_fix_router_loading
Router Async Improvements
|
2024-04-10 20:16:53 -07:00 |
|
Krrish Dholakia
|
8f06c2d8c4
|
fix(router.py): fix datetime object
|
2024-04-10 17:55:24 -07:00 |
|
Ishaan Jaff
|
686810ec00
|
fix - allow base64 cache hits embedding responses
|
2024-04-10 16:44:40 -07:00 |
|
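The base64 cache-hit fix above implies converting a cached float embedding into the base64 form a caller requested. A hedged sketch of that conversion (not litellm's actual cache code):

```python
import base64
import struct

def floats_to_base64(vec):
    # Pack the floats as little-endian float32 and base64-encode, so a
    # cached float embedding can serve a request that asked for
    # encoding_format="base64".
    return base64.b64encode(struct.pack(f"<{len(vec)}f", *vec)).decode()

encoded = floats_to_base64([1.0, 2.0])
```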
Krrish Dholakia
|
06a0ca1e80
|
fix(proxy_cli.py): don't double load the router config
was causing callbacks to be instantiated twice, double counting usage in the cache
|
2024-04-10 13:23:56 -07:00 |
|
Ishaan Jaff
|
3083326c33
|
Merge pull request #2893 from unclecode/main
Fix issue #2832: Add protected_namespaces to Config class within utils.py, router.py and completion.py to avoid the warning message.
|
2024-04-09 08:51:41 -07:00 |
|
Krrish Dholakia
|
075c96a408
|
fix(utils.py): fix reordering of items for cached embeddings
ensures the cached embedding item is returned in the correct order
|
2024-04-08 12:18:24 -07:00 |
|
unclecode
|
311e801ab4
|
Fix issue #2832: Add protected_namespaces to Config class within utils.py, router.py and completion.py to avoid the warning message.
|
2024-04-08 12:43:17 +08:00 |
|
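The `protected_namespaces` fix above silences pydantic's warning for fields that start with `model_` (which clash with pydantic's reserved namespace). A minimal sketch using pydantic v2 syntax; the model below is illustrative, not litellm's actual class:

```python
from pydantic import BaseModel, ConfigDict

class Deployment(BaseModel):
    # Fields beginning with "model_" collide with pydantic's protected
    # "model_" namespace and emit a UserWarning by default; clearing
    # protected_namespaces suppresses it.
    model_config = ConfigDict(protected_namespaces=())

    model_name: str
    model_id: str

d = Deployment(model_name="gpt-3.5-turbo", model_id="abc")
```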
Ishaan Jaff
|
d1d3d932ca
|
Merge pull request #2879 from BerriAI/litellm_async_anthropic_api
[Feat] Async Anthropic API 97.5% lower median latency
|
2024-04-07 09:56:52 -07:00 |
|
Krrish Dholakia
|
fd67dc7556
|
fix(utils.py): fix import
|
2024-04-06 18:37:38 -07:00 |
|
Krrish Dholakia
|
179cede5a4
|
fix(utils.py): fix circular import
|
2024-04-06 18:29:51 -07:00 |
|
Ishaan Jaff
|
e3c066dcd2
|
async anthropic streaming
|
2024-04-06 17:36:56 -07:00 |
|
Krrish Dholakia
|
b145d620e0
|
fix(utils.py): add gemini api base support to 'get_api_base'
|
2024-04-06 16:08:15 -07:00 |
|
Krrish Dholakia
|
0dad78b53c
|
feat(proxy/utils.py): return api base for request hanging alerts
|
2024-04-06 15:58:53 -07:00 |
|
Krrish Dholakia
|
474afae9d0
|
fix(utils.py): fix content check in pre-call rules
|
2024-04-06 09:03:19 -07:00 |
|
Krrish Dholakia
|
94957f7cfa
|
fix(utils.py): move info statement to debug
|
2024-04-05 22:06:46 -07:00 |
|
Ishaan Jaff
|
72fddabf84
|
Merge pull request #2868 from BerriAI/litellm_add_command_r_on_proxy
Add Azure Command-r-plus on litellm proxy
|
2024-04-05 15:13:47 -07:00 |
|
Ishaan Jaff
|
f65828db26
|
Merge pull request #2861 from BerriAI/litellm_add_azure_command_r_plust
[FEAT] add azure command-r-plus
|
2024-04-05 15:13:35 -07:00 |
|