Commit graph

552 commits

Author SHA1 Message Date
Krish Dholakia
69280177a3
Merge pull request #3308 from BerriAI/litellm_fix_streaming_n
fix(utils.py): fix the response object returned when n>1 for stream=true
2024-04-25 18:36:54 -07:00
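The fix above concerns streaming when `n > 1`. A minimal sketch of that usage, assuming litellm's OpenAI-compatible `completion` API; the model and prompt are illustrative:

```python
import litellm

# Request two completions for one prompt and stream them back.
response = litellm.completion(
    model="gpt-3.5-turbo",  # illustrative model
    messages=[{"role": "user", "content": "Write a haiku about the sea."}],
    n=2,
    stream=True,
)

# Each streamed chunk carries a choice index, so the two completions
# can be reassembled separately from the interleaved chunks.
completions = {0: "", 1: ""}
for chunk in response:
    for choice in chunk.choices:
        completions[choice.index] += choice.delta.content or ""

print(completions[0])
print(completions[1])
```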
Krrish Dholakia
9f5ba67f5d fix(utils.py): return logprobs as an object not dict 2024-04-25 17:55:18 -07:00
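A hedged sketch of what "logprobs as an object" means in practice, using OpenAI-style parameters; the model and values are illustrative:

```python
import litellm

response = litellm.completion(
    model="gpt-3.5-turbo",  # illustrative model
    messages=[{"role": "user", "content": "Say hello"}],
    logprobs=True,
    top_logprobs=2,
)

# With logprobs returned as an object rather than a plain dict,
# attribute access works as it does with the OpenAI SDK.
logprobs = response.choices[0].logprobs
first = logprobs.content[0]
print(first.token, first.logprob)
```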
Krrish Dholakia
caf1e28ba3 test(test_completion.py): fix test 2024-04-25 14:07:07 -07:00
Krrish Dholakia
4f46b4c397 fix(factory.py): add replicate meta llama prompt templating support 2024-04-25 08:25:00 -07:00
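A sketch of the Replicate-hosted Meta Llama call this templating supports; the Replicate model slug is an assumption for illustration:

```python
import litellm

# Chat messages are converted into the Llama prompt format before the
# request is sent to Replicate.
response = litellm.completion(
    model="replicate/meta/meta-llama-3-8b-instruct",  # illustrative slug
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "What is LiteLLM?"},
    ],
)
print(response.choices[0].message.content)
```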
Ishaan Jaff
74817c560e (ci/cd) run again 2024-04-24 23:23:14 -07:00
Ishaan Jaff
13e0ac64ef (fix) updating router settings 2024-04-24 23:09:25 -07:00
Ishaan Jaff
242830108c (ci/cd) run again 2024-04-24 21:09:49 -07:00
Krish Dholakia
8d2e411df6
Merge pull request #3124 from elisalimli/bugfix/add-missing-tool-calls-mistral-messages
Add missing tool_calls and name to messages
2024-04-23 17:25:12 -07:00
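PR #3124 is about preserving `tool_calls` and `name` when messages are translated for Mistral. A sketch of a history that exercises those fields; the tool, its arguments, and the IDs are hypothetical:

```python
import litellm

messages = [
    {"role": "user", "content": "What's the weather in Paris?"},
    {
        # Assistant turn whose `tool_calls` field must be forwarded.
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_1",
                "type": "function",
                "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
            }
        ],
    },
    {
        # Tool result whose `name` also needs to be forwarded.
        "role": "tool",
        "tool_call_id": "call_1",
        "name": "get_weather",
        "content": '{"temperature_c": 18}',
    },
]

response = litellm.completion(model="mistral/mistral-large-latest", messages=messages)
print(response.choices[0].message.content)
```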
alisalim17
319e006d13 test: add test for function calling with mistral large latest to test_completion.py 2024-04-21 11:27:21 +04:00
Ishaan Jaff
81e4d59357 test - gemini-1.5-pro-latest 2024-04-19 21:22:19 -07:00
Ishaan Jaff
cb053398eb (ci/cd) run again 2024-04-18 21:04:07 -07:00
Ishaan Jaff
8c830e031d (ci/cd) run again 2024-04-18 20:35:21 -07:00
Krrish Dholakia
14eb8c374b test(test_completion.py): skip local test 2024-04-17 19:14:41 -07:00
Krrish Dholakia
18e3cf8bff fix(utils.py): support azure mistral function calling 2024-04-17 19:10:26 -07:00
Ishaan Jaff
409bd5b4ab ci/cd run again 2024-04-17 08:01:39 -07:00
Ishaan Jaff
70f1dc2bb9 (ci/cd) run again 2024-04-16 21:44:11 -07:00
Ishaan Jaff
5393930701 fix function calling prompt - ask llm to respond in fahrenheit 2024-04-16 21:09:53 -07:00
Krrish Dholakia
26286a54b8 fix(anthropic_text.py): add support for async text completion calls 2024-04-15 08:15:00 -07:00
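A sketch of an async text-completion call of the kind this commit enables; the Anthropic text-completion model name and routing details are assumptions:

```python
import asyncio
import litellm

async def main():
    # Async text completion; the model name is illustrative.
    response = await litellm.atext_completion(
        model="claude-2",
        prompt="Human: Name one ocean.\n\nAssistant:",
        max_tokens=32,
    )
    print(response.choices[0].text)

asyncio.run(main())
```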
Ishaan Jaff
5856ec03c6 (ci/cd) run again 2024-04-12 20:48:26 -07:00
Krrish Dholakia
a311788f0d test(test_completion.py): handle api instability 2024-04-09 21:58:48 -07:00
Krrish Dholakia
a6b004f10b test(test_completion.py): change model 2024-04-09 21:38:17 -07:00
Krrish Dholakia
855e7ed9d2 fix(main.py): handle translating text completion openai to chat completion for async requests
also adds testing for this, to prevent future regressions
2024-04-09 16:47:49 -07:00
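A sketch of the call path described above: a text-completion style request against a chat model, issued asynchronously; the model and prompt are illustrative:

```python
import asyncio
import litellm

async def main():
    # litellm translates this prompt-based call into a chat completion
    # for chat-only models such as gpt-3.5-turbo.
    response = await litellm.atext_completion(
        model="gpt-3.5-turbo",
        prompt="Finish this sentence: The quick brown fox",
        max_tokens=20,
    )
    print(response.choices[0].text)

asyncio.run(main())
```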
Ishaan Jaff
3d298fc549 (test) completion 2024-04-05 21:03:04 -07:00
Ishaan Jaff
7fc416b636 (ci/cd) run again 2024-04-05 17:26:02 -07:00
Ishaan Jaff
fdadeabe79 fix testing yaml 2024-04-05 16:17:53 -07:00
Ishaan Jaff
cfe358abaa simplify calling azure/command-r-plus 2024-04-05 09:18:11 -07:00
Ishaan Jaff
5d196ff300 test - azure/command-r-plus 2024-04-05 08:56:05 -07:00
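A sketch of the azure/command-r-plus call these two commits exercise; the environment-variable names are placeholders for an Azure AI endpoint and key:

```python
import os
import litellm

response = litellm.completion(
    model="azure/command-r-plus",
    api_base=os.environ["AZURE_AI_API_BASE"],  # placeholder env var
    api_key=os.environ["AZURE_AI_API_KEY"],    # placeholder env var
    messages=[{"role": "user", "content": "Summarize LiteLLM in one line."}],
)
print(response.choices[0].message.content)
```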
Krrish Dholakia
dfcb6bcbc5 test(test_completion.py): skip sagemaker test - aws account suspended 2024-04-04 09:52:24 -07:00
Ishaan Jaff
fa44f45429 (ci/cd) run again 2024-04-03 21:02:08 -07:00
Ishaan Jaff
d627c90bfd ci/cd run again 2024-04-03 20:13:46 -07:00
Ishaan Jaff
ddb35facc0 ci/cd run again 2024-04-01 07:40:05 -07:00
Krrish Dholakia
49642a5b00 fix(factory.py): parse list in xml tool calling response (anthropic)
improves tool calling outparsing to check if list in response. Also returns the raw response back to the user via `response._hidden_params["original_response"]`, so user can see exactly what anthropic returned
2024-03-29 11:51:26 -07:00
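A sketch of the Anthropic tool-calling flow this commit improves, including the raw-response escape hatch it mentions; the model and tool schema are illustrative:

```python
import litellm

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = litellm.completion(
    model="claude-3-opus-20240229",  # illustrative Anthropic model
    messages=[{"role": "user", "content": "What's the weather in Paris and Rome?"}],
    tools=tools,
)

# Parsed tool calls (possibly several, hence the list handling in factory.py).
print(response.choices[0].message.tool_calls)
# Raw provider output preserved for inspection, as described in the commit.
print(response._hidden_params["original_response"])
```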
Krrish Dholakia
109cd93a39 fix(sagemaker.py): support model_id consistently. support dynamic args for async calls 2024-03-29 09:05:00 -07:00
Krrish Dholakia
d547944556 fix(sagemaker.py): support 'model_id' param for sagemaker
allow passing inference component param to sagemaker in the same format as we handle this for bedrock
2024-03-29 08:43:17 -07:00
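A sketch of the `model_id` usage this commit describes (an inference-component identifier passed alongside the endpoint, mirroring the Bedrock handling); both identifiers are hypothetical placeholders:

```python
import litellm

response = litellm.completion(
    model="sagemaker/my-llama2-endpoint",   # hypothetical endpoint name
    model_id="my-inference-component",      # hypothetical inference component
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```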
Krrish Dholakia
9ef7afd2b4 test(test_completion.py): skip unresponsive endpoint 2024-03-27 20:12:22 -07:00
Ishaan Jaff
787c9b7df0 (test) claude-1 api is unstable 2024-03-26 08:07:16 -07:00
Krrish Dholakia
2a9fd4c28d test(test_completion.py): make default claude 3 test message multi-turn 2024-03-23 14:34:42 -07:00
Krrish Dholakia
9b951b906d test(test_completion.py): fix claude multi-turn conversation test 2024-03-23 00:56:41 -07:00
Ishaan Jaff
52a5ed410b (ci/cd) run again 2024-03-18 21:24:24 -07:00
Krish Dholakia
0368a335e6
Merge branch 'main' into support_anthropic_function_result 2024-03-16 09:58:08 -07:00
Zihao Li
91f467f55d Add tool result submission to claude 3 function call test and claude 3 multi-turn conversion to ensure alternating message roles 2024-03-16 01:40:36 +08:00
Krish Dholakia
32ca306123
Merge pull request #2535 from BerriAI/litellm_fireworks_ai_support
feat(utils.py): add native fireworks ai support
2024-03-15 10:02:53 -07:00
Krrish Dholakia
9909f44015 feat(utils.py): add native fireworks ai support
addresses - https://github.com/BerriAI/litellm/issues/777, https://github.com/BerriAI/litellm/issues/2486
2024-03-15 09:09:59 -07:00
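A sketch of the native Fireworks AI routing added here, assuming the `fireworks_ai/` provider prefix; the model path is illustrative:

```python
import litellm

response = litellm.completion(
    model="fireworks_ai/accounts/fireworks/models/mixtral-8x7b-instruct",
    messages=[{"role": "user", "content": "Hello from LiteLLM"}],
)
print(response.choices[0].message.content)
```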
ishaan-jaff
7f0cebe756 (ci/cd) check triggers 2024-03-15 08:21:16 -07:00
ishaan-jaff
fd33eda29d (ci/cd) check linked triggers 2024-03-15 08:17:55 -07:00
ishaan-jaff
82e44e4962 (ci/cd) check actions run 2024-03-14 20:58:22 -07:00
ishaan-jaff
e7240bb5c1 (ci/cd) fix litellm triggers on commits 2024-03-14 20:50:02 -07:00
ishaan-jaff
e3cc0da5f1 (ci/cd) run testing again 2024-03-13 21:47:56 -07:00
Krish Dholakia
0d18f3c0ca
Merge pull request #2473 from BerriAI/litellm_fix_compatible_provider_model_name
fix(openai.py): return model name with custom llm provider for openai-compatible endpoints (e.g. mistral, together ai, etc.)
2024-03-12 12:58:29 -07:00
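A sketch of the behavior described in PR #2473: for an OpenAI-compatible provider, the response should report the model name for the custom provider. The provider/model below is illustrative:

```python
import litellm

response = litellm.completion(
    model="mistral/mistral-tiny",  # illustrative OpenAI-compatible provider/model
    messages=[{"role": "user", "content": "Hi"}],
)
# Per PR #2473, the returned model name should reflect the custom provider's
# model for OpenAI-compatible endpoints.
print(response.model)
```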
Ishaan Jaff
5172fb1de9
Merge pull request #2474 from BerriAI/litellm_support_command_r
[New-Model] Cohere/command-r
2024-03-12 11:11:56 -07:00
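A sketch of calling the newly supported Cohere Command R model; authentication via the standard Cohere API key environment variable is assumed:

```python
import litellm

response = litellm.completion(
    model="command-r",
    messages=[{"role": "user", "content": "Name three use cases for RAG."}],
)
print(response.choices[0].message.content)
```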