Krrish Dholakia | 344353d363 | fix(anthropic.py): remove raise error on 'empty content' (Fixes https://github.com/BerriAI/litellm/issues/3453) | 2024-05-06 11:42:09 -07:00
Krish Dholakia | b3c9a8fbbb | Merge pull request #3299 from themrzmaster/main (Allowing extra headers for bedrock) | 2024-05-06 07:45:53 -07:00
Krrish Dholakia | 80a1344c55 | fix(utils.py): anthropic error handling | 2024-05-06 07:25:12 -07:00
Lucca Zenóbio | 146a49103f | Merge branch 'main' into main | 2024-05-06 09:40:23 -03:00
Jack Collins | 218f15de60 | Fix: get format from data not optional_params ollama non-stream completion | 2024-05-05 18:59:26 -07:00
Jack Collins | 5bc934303c | Add missing import itertools.chain | 2024-05-05 18:54:08 -07:00
Jack Collins | 5393c5459e | Fix: Set finish_reason to tool_calls for non-stream responses in ollama | 2024-05-05 18:52:31 -07:00
Jack Collins | 2d43423138 | Parse streamed function calls as single delta in ollama | 2024-05-05 18:52:20 -07:00
Jack Collins | 8b5af7e4ce | Make newline same in async function | 2024-05-05 18:51:53 -07:00
Jack Collins | d02e9618f8 | Fix: Set finish_reason to tool_calls for non-stream responses | 2024-05-05 18:47:58 -07:00
Jack Collins | 22c9c5112b | Parse streamed function calls as single delta | 2024-05-05 18:47:16 -07:00
Krish Dholakia | fd06009199 | Merge pull request #3455 from BerriAI/litellm_assistants_support (feat(openai.py): add support for openai assistants) | 2024-05-04 22:21:44 -07:00
Krrish Dholakia | 91e96ce4b1 | fix(openai.py): fix typing import for python 3.8 | 2024-05-04 21:49:30 -07:00
Krrish Dholakia | e7669652b5 | fix(openai.py): fix linting error | 2024-05-04 21:48:42 -07:00
Krrish Dholakia | a04b7578f5 | feat(assistants/main.py): support litellm.get_assistants() and litellm.get_messages() | 2024-05-04 21:30:28 -07:00
Krrish Dholakia | 6b061e3cc5 | feat(assistants/main.py): support 'litellm.get_threads' | 2024-05-04 21:14:03 -07:00
Krrish Dholakia | 300ad1dec1 | feat(assistants/main.py): add 'add_message' endpoint | 2024-05-04 19:56:11 -07:00
Krrish Dholakia | b0845d82cd | fix(assistants/main.py): support litellm.create_thread() call | 2024-05-04 19:35:37 -07:00
Krrish Dholakia | 92d548b50b | feat(openai.py): add support for openai assistants (v0 commit; Closes https://github.com/BerriAI/litellm/issues/2842) | 2024-05-04 17:27:48 -07:00
Krrish Dholakia | 8f22092c65 | fix(bedrock.py): map finish reason for bedrock | 2024-05-04 12:45:40 -07:00
Krrish Dholakia | e8317d7344 | fix(factory.py): support openai 'functions' messages | 2024-05-04 12:33:39 -07:00
Krrish Dholakia | 748c4e022b | fix(factory.py): support mapping openai 'tool' message to anthropic format | 2024-05-04 10:14:52 -07:00
Krrish Dholakia | d222dd8f82 | fix(factory.py): support 'function' openai message role for anthropic (Fixes https://github.com/BerriAI/litellm/issues/3446) | 2024-05-04 10:03:30 -07:00
Krrish Dholakia | cfb6df4987 | fix(main.py): support new 'supports_system_message=False' param (Fixes https://github.com/BerriAI/litellm/issues/3325) | 2024-05-03 21:31:45 -07:00
Krish Dholakia | 0d3aa8d323 | Merge pull request #3433 from BerriAI/litellm_timeout_fix (fix(bedrock.py): convert httpx.timeout to boto3 valid timeout) | 2024-05-03 18:53:59 -07:00
Krrish Dholakia | 4efaacc5ce | fix(anthropic.py): handle whitespace characters for anthropic calls | 2024-05-03 17:31:34 -07:00
Krrish Dholakia | defed7c08a | fix(bedrock.py): fix boto3 config init | 2024-05-03 17:22:06 -07:00
Krrish Dholakia | d06702cbf4 | fix(anthropic.py): drop unsupported non-whitespace character value when calling anthropic with stop sequences (Fixes https://github.com/BerriAI/litellm/issues/3286) | 2024-05-03 16:59:49 -07:00
Krrish Dholakia | 7715a9d333 | fix(bedrock.py): convert httpx.timeout to boto3 valid timeout (Closes https://github.com/BerriAI/litellm/issues/3398) | 2024-05-03 16:24:21 -07:00
Lucca Zenóbio | bf2a319713 | Merge branch 'main' into main | 2024-05-02 09:46:34 -03:00
Krrish Dholakia | a241cdb818 | fix(utils.py): fix default values | 2024-05-01 22:24:37 -07:00
Krish Dholakia | 52f43c8c2e | Merge branch 'main' into litellm_ollama_tool_call_reponse | 2024-05-01 10:24:05 -07:00
alisalim17 | 49d7214f69 | fix: cohere tool results | 2024-04-29 14:20:24 +04:00
Krish Dholakia | 7ed64709aa | Merge branch 'main' into litellm_common_auth_params | 2024-04-28 08:38:06 -07:00
Krrish Dholakia | 33b5829e31 | fix(vertex_ai.py): support safety settings for async streaming calls | 2024-04-27 11:43:22 -07:00
Krish Dholakia | 26bacef87b | Merge branch 'main' into litellm_default_router_retries | 2024-04-27 11:21:57 -07:00
Krrish Dholakia | ee68c0aaaa | feat(utils.py): unify common auth params across azure/vertex_ai/bedrock/watsonx | 2024-04-27 11:06:18 -07:00
Krrish Dholakia | ec0bd566ef | fix(watsonx.py): use common litellm params for api key, api base, etc. | 2024-04-27 10:15:27 -07:00
Krish Dholakia | 8e720ec846 | Revert "Fix Anthropic Messages Prompt Template function to add a third condition: list of text-content dictionaries" | 2024-04-27 08:57:18 -07:00
Krish Dholakia | b7beab2e39 | Merge pull request #3270 from simonsanvil/feature/watsonx-integration ((feat) add IBM watsonx.ai as an llm provider) | 2024-04-27 05:48:34 -07:00
Krish Dholakia | d9d6038f29 | Merge pull request #3312 from hellof20/main (add safety_settings parameters to Vertex AI async mode) | 2024-04-27 05:46:06 -07:00
Emir Ayar | 58fdd44155 | Merge branch 'BerriAI:main' into fix-anthropic-messages-api | 2024-04-27 11:50:04 +02:00
Krrish Dholakia | 92bf686b10 | fix(replicate.py): pass version if passed in | 2024-04-26 17:11:21 -07:00
Krish Dholakia | 37f2045181 | Merge branch 'main' into litellm_default_router_retries | 2024-04-26 14:52:24 -07:00
pwm | de5791a72b | add safety_settings parameters to Vertex vision async_completion function | 2024-04-26 14:35:01 +08:00
Krrish Dholakia | ee518b7728 | test(test_router_fallbacks.py): fix testing | 2024-04-25 17:43:40 -07:00
Krrish Dholakia | f0bf0af2a9 | fix(factory.py): correctly apply bos token for llama3 instruct prompt template | 2024-04-25 17:37:17 -07:00
Krrish Dholakia | 30f5bbd0cc | fix(vertex_ai.py): handle stream=false (also adds unit testing for vertex ai calls with langchain) | 2024-04-25 13:59:37 -07:00
Krrish Dholakia | f1b2405fe0 | fix(router.py): fix default retry logic | 2024-04-25 11:57:27 -07:00
Lucca Zenobio | e73978b0d9 | merge | 2024-04-25 15:00:07 -03:00