Krrish Dholakia | c5d1899940 | feat(Support-pass-through-for-bedrock-endpoints): Allows pass-through support for bedrock endpoints | 2024-08-17 17:57:43 -07:00
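A minimal sketch of what such a pass-through call might look like against a locally running LiteLLM proxy; the /bedrock/model/.../invoke route, proxy address, virtual key, and payload shape are assumptions modeled on Bedrock's native InvokeModel API, not details confirmed by this log.

    # Hypothetical pass-through call to a Bedrock-style route on a LiteLLM proxy.
    # Route, base URL, key, and payload shape are assumptions.
    import requests

    PROXY_BASE = "http://localhost:4000"   # assumed proxy address
    API_KEY = "sk-1234"                    # assumed LiteLLM virtual key

    resp = requests.post(
        f"{PROXY_BASE}/bedrock/model/anthropic.claude-3-sonnet-20240229-v1:0/invoke",
        headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
        json={
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 256,
            "messages": [{"role": "user", "content": "Hello via the pass-through route"}],
        },
    )
    print(resp.status_code, resp.json())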
Ishaan Jaff | 7fa0539890 | fix sagemaker tests | 2024-08-15 18:34:20 -07:00
Ishaan Jaff | d0bdafa49c | run mock tests for test_completion_sagemaker | 2024-08-15 14:49:21 -07:00
Krish Dholakia | bda1ee16a9 | Merge branch 'main' into litellm_azure_ai_openai_support | 2024-08-14 17:53:27 -07:00
Ishaan Jaff | d3bdfac7a2 | Merge pull request #5210 from BerriAI/litellm_add_prompt_caching_support ([Feat] Add Anthropic API Prompt Caching Support) | 2024-08-14 17:43:01 -07:00
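A minimal sketch of the Anthropic prompt caching usage this PR adds, assuming litellm forwards a content block carrying a cache_control marker through to Anthropic's API; the model id and prompt text are placeholders.

    # Sketch: mark a large, reusable system block as cacheable on an Anthropic call.
    # Assumes litellm passes the "cache_control" field through to Anthropic's API.
    import litellm

    response = litellm.completion(
        model="anthropic/claude-3-5-sonnet-20240620",  # illustrative model id
        messages=[
            {
                "role": "system",
                "content": [
                    {
                        "type": "text",
                        "text": "You are a contract analyst. <long reusable context>",
                        "cache_control": {"type": "ephemeral"},  # cache this block
                    }
                ],
            },
            {"role": "user", "content": "Summarize the termination clause."},
        ],
    )
    print(response.choices[0].message.content)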
Ishaan Jaff | b17437e7b1 | move claude prompt caching to diff file | 2024-08-14 16:28:12 -07:00
Ishaan Jaff | 0290b15333 | add test for caching tool calls | 2024-08-14 16:19:14 -07:00
Ishaan Jaff | 30f24f6940 | fix bedrock test | 2024-08-14 15:27:20 -07:00
Ishaan Jaff | 14bb0c670c | test test_anthropic_api_prompt_caching_basic | 2024-08-14 15:06:10 -07:00
Ishaan Jaff | c14a432607 | test anthropic prompt caching | 2024-08-14 14:59:46 -07:00
Krrish Dholakia | 2f47348d8e | fix(factory.py): support assistant messages as a list of dictionaries - cohere messages api (Fixes https://github.com/BerriAI/litellm/pull/5121) | 2024-08-14 14:19:59 -07:00
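The factory.py fix above concerns assistant turns whose content is a list of dictionaries rather than a plain string when routed to Cohere's messages API; a hedged sketch of that message shape, with an illustrative model id:

    # Sketch: assistant message given as a list of content blocks, sent to a
    # Cohere chat model through litellm. Model name is illustrative.
    import litellm

    response = litellm.completion(
        model="cohere_chat/command-r-plus",
        messages=[
            {"role": "user", "content": "What is the capital of France?"},
            {
                "role": "assistant",
                "content": [{"type": "text", "text": "The capital of France is Paris."}],
            },
            {"role": "user", "content": "And roughly how many people live there?"},
        ],
    )
    print(response.choices[0].message.content)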
Krrish Dholakia | 3026e69926 | fix(utils.py): support calling openai models via azure_ai/ | 2024-08-14 13:41:04 -07:00
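A hedged sketch of calling an OpenAI model through the azure_ai/ prefix that this fix enables; the deployment name, endpoint URL, and key below are placeholders, not values from the log.

    # Sketch: route an OpenAI model hosted on Azure AI via the azure_ai/ prefix.
    # Deployment name, api_base, and api_key are placeholder assumptions.
    import litellm

    response = litellm.completion(
        model="azure_ai/gpt-4o",                                        # assumed deployment name
        api_base="https://my-endpoint.westus.inference.ai.azure.com",   # assumed endpoint
        api_key="my-azure-ai-key",                                      # assumed credential
        messages=[{"role": "user", "content": "Hello from azure_ai/"}],
    )
    print(response.choices[0].message.content)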
Ishaan Jaff | 55d6b3c234 | fix ci/cd pipeline | 2024-08-13 20:33:33 -07:00
Ishaan Jaff | e55c70e612 | ci/cd run again | 2024-08-09 19:13:52 -07:00
Krrish Dholakia | df4e84e14e | build(test_completion.py): ci/cd run again | 2024-08-09 12:28:26 -07:00
Krrish Dholakia | 8637988bc0 | fix(huggingface_restapi.py): support passing 'wait_for_model' param on completion calls | 2024-08-09 09:25:19 -07:00
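A hedged sketch of the 'wait_for_model' pass-through described above, assuming litellm forwards it as a Hugging Face Inference API option; the model id is illustrative.

    # Sketch: ask the Hugging Face endpoint to wait for a cold model instead of
    # returning a 503 while it loads. Model id is illustrative.
    import litellm

    response = litellm.completion(
        model="huggingface/mistralai/Mistral-7B-Instruct-v0.3",
        messages=[{"role": "user", "content": "Hello"}],
        wait_for_model=True,  # forwarded to the HF Inference API per the commit above
    )
    print(response.choices[0].message.content)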
Ishaan Jaff | 1a4fae923c | handle anthropic internal server errors | 2024-08-08 19:14:31 -07:00
Krrish Dholakia | 35f08ece79 | fix(factory.py): handle openai function message having tool call id | 2024-08-08 17:06:50 -07:00
Krrish Dholakia | 44b6036560 | test(test_completion.py): fix merge error | 2024-08-07 20:31:38 -07:00
Krish Dholakia | 7d28b6ebc3 | Merge branch 'main' into litellm_personal_user_budgets | 2024-08-07 19:59:50 -07:00
Ishaan Jaff | 6331bd6b4f | run that ci cd again | 2024-08-07 18:47:11 -07:00
Ishaan Jaff | 08fb9faae5 | run that ci/cd again | 2024-08-07 18:25:52 -07:00
Ishaan Jaff | fa0fa13b28 | fix test for wildcard routing | 2024-08-07 16:41:00 -07:00
Krrish Dholakia | afd89c8105 | test(test_completion.py): handle gemini instability | 2024-08-07 15:58:47 -07:00
Ishaan Jaff | 21602ea703 | ci/cd run again | 2024-08-07 15:44:54 -07:00
Ishaan Jaff | ee6477e1ac | fix - someone resolved a merge conflict badly | 2024-08-07 15:23:15 -07:00
Krish Dholakia | ab487f0cf9 | Merge pull request #5079 from BerriAI/litellm_add_pydantic_model_support (feat(utils.py): support passing response_format as pydantic model) | 2024-08-07 14:43:05 -07:00
Krish Dholakia | 78d72acc44 | Merge branch 'main' into litellm_anthropic_streaming_tool_call_fix | 2024-08-07 14:33:30 -07:00
Ishaan Jaff | da7469296a | gemini test skip internal server error | 2024-08-07 13:12:45 -07:00
Krrish Dholakia | 26ad015ccf | test: update build requirements | 2024-08-07 13:09:49 -07:00
Krish Dholakia | 77a33baabb | Merge branch 'main' into litellm_add_pydantic_model_support | 2024-08-07 13:07:46 -07:00
Krrish Dholakia | 788b06a33c | fix(utils.py): support deepseek tool calling (Fixes https://github.com/BerriAI/litellm/issues/5081) | 2024-08-07 11:14:05 -07:00
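The DeepSeek fix above targets OpenAI-style tool calling; a minimal sketch of that call shape, with an illustrative tool schema and prompt:

    # Sketch: OpenAI-style tool calling against DeepSeek through litellm.
    import litellm

    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ]

    response = litellm.completion(
        model="deepseek/deepseek-chat",
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        tools=tools,
        tool_choice="auto",
    )
    print(response.choices[0].message.tool_calls)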
Krrish Dholakia | bac35c9e47 | test(test_completion.py): handle internal server error in test | 2024-08-07 10:21:37 -07:00
Krrish Dholakia | 89273722ba | fix(bedrock_httpx.py): handle empty arguments returned during tool calling streaming | 2024-08-07 09:54:50 -07:00
Krrish Dholakia | ec4051592b | fix(anthropic.py): handle scenario where anthropic returns invalid json string for tool call while streaming (Fixes https://github.com/BerriAI/litellm/issues/5063) | 2024-08-07 09:24:11 -07:00
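The anthropic.py fix above covers streamed tool calls whose JSON arguments arrive in fragments; a hedged sketch of that streaming path, assuming litellm's OpenAI-compatible chunk layout, with an illustrative model id and tool schema:

    # Sketch: stream an Anthropic tool call via litellm and collect the
    # incremental JSON argument fragments before parsing them.
    import litellm

    stream = litellm.completion(
        model="anthropic/claude-3-5-sonnet-20240620",
        messages=[{"role": "user", "content": "Look up the weather in Berlin."}],
        tools=[{
            "type": "function",
            "function": {
                "name": "get_weather",
                "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
            },
        }],
        stream=True,
    )

    buffered_args = ""
    for chunk in stream:
        delta = chunk.choices[0].delta
        if getattr(delta, "tool_calls", None):
            # arguments stream in as partial JSON; buffer them rather than parsing each piece
            buffered_args += delta.tool_calls[0].function.arguments or ""
    print(buffered_args)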
Ishaan Jaff | b57efb32d4 | run ci / cd again | 2024-08-06 21:35:46 -07:00
Ishaan Jaff | dc7eba5f20 | ci/cd run again | 2024-08-06 21:28:22 -07:00
Krrish Dholakia | 2b132c6bef | feat(utils.py): support passing response_format as pydantic model (Related issue: https://github.com/BerriAI/litellm/issues/5074) | 2024-08-06 18:16:07 -07:00
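A minimal sketch of the response_format-as-Pydantic-model feature introduced above; the model id and schema are illustrative, and it assumes litellm translates the class into a JSON-schema response format.

    # Sketch: pass a Pydantic class directly as response_format.
    from pydantic import BaseModel
    import litellm

    class CalendarEvent(BaseModel):
        name: str
        date: str
        participants: list[str]

    response = litellm.completion(
        model="gpt-4o-2024-08-06",  # illustrative model with structured-output support
        messages=[{"role": "user", "content": "Alice and Bob meet for standup on Friday."}],
        response_format=CalendarEvent,
    )
    print(response.choices[0].message.content)  # JSON conforming to CalendarEvent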
Ishaan Jaff | 455e97482f | ci/cd run again | 2024-08-05 22:33:49 -07:00
Ishaan Jaff | 0fd8e5b7be | ci/cd run again | 2024-08-05 21:21:01 -07:00
Ishaan Jaff | 59a48b2d0d | run ci/cd again | 2024-08-05 20:04:19 -07:00
Ishaan Jaff | dce7c459f8 | run ci/cd again | 2024-08-05 16:52:45 -07:00
Ishaan Jaff | 9b7e84ac34 | ci/cd run again | 2024-08-03 18:48:10 -07:00
Krrish Dholakia | acbc2917b8 | feat(utils.py): Add github as a provider (Closes https://github.com/BerriAI/litellm/issues/4922#issuecomment-2266564469) | 2024-08-03 09:11:22 -07:00
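A hedged sketch of the new github provider; the github/<model> prefix follows the commit above, while the environment variable and model id below are assumptions, not details confirmed by this log.

    # Sketch: call a model through the "github" provider added above.
    # GITHUB_API_KEY and the model id are assumptions.
    import os
    import litellm

    os.environ["GITHUB_API_KEY"] = "github_pat_..."  # assumed token variable

    response = litellm.completion(
        model="github/llama-3.1-8b-instruct",  # illustrative model id
        messages=[{"role": "user", "content": "Hello from the GitHub provider"}],
    )
    print(response.choices[0].message.content)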
Ishaan Jaff | 58af500af5 | ci/cd run again | 2024-08-02 11:46:45 -07:00
Krrish Dholakia | 52057b2482 | test: handle anthropic rate limit error | 2024-08-02 08:57:09 -07:00
Ishaan Jaff | 44c81acd86 | ci/cd run again | 2024-08-01 19:55:12 -07:00
Ishaan Jaff | 6f79081ea2 | ci/cd - anyscale discontinued their API endpoints - skip test | 2024-08-01 17:58:48 -07:00
Krrish Dholakia | 6612f3a0d4 | test: handle predibase api failures | 2024-07-31 19:39:58 -07:00
Krrish Dholakia | 1206b7626a | fix(utils.py): return additional kwargs from openai-like response body (Closes https://github.com/BerriAI/litellm/issues/4981) | 2024-07-31 15:37:03 -07:00