alisalim17
319e006d13
test: add test for function calling with mistral large latest to test_completion.py
2024-04-21 11:27:21 +04:00
Krrish Dholakia
14eb8c374b
test(test_completion.py): skip local test
2024-04-17 19:14:41 -07:00
Krrish Dholakia
18e3cf8bff
fix(utils.py): support azure mistral function calling
2024-04-17 19:10:26 -07:00
Ishaan Jaff
409bd5b4ab
ci/cd run again
2024-04-17 08:01:39 -07:00
Ishaan Jaff
70f1dc2bb9
(ci/cd) run again
2024-04-16 21:44:11 -07:00
Ishaan Jaff
5393930701
fix function calling prompt - ask llm to respond in fahrenheit
2024-04-16 21:09:53 -07:00
Krrish Dholakia
26286a54b8
fix(anthropic_text.py): add support for async text completion calls
2024-04-15 08:15:00 -07:00
Ishaan Jaff
5856ec03c6
(ci/cd) run again
2024-04-12 20:48:26 -07:00
Krrish Dholakia
a311788f0d
test(test_completion.py): handle api instability
2024-04-09 21:58:48 -07:00
Krrish Dholakia
a6b004f10b
test(test_completion.py): change model
2024-04-09 21:38:17 -07:00
Krrish Dholakia
855e7ed9d2
fix(main.py): handle translating text completion openai to chat completion for async requests
also adds testing for this, to prevent future regressions
2024-04-09 16:47:49 -07:00
Ishaan Jaff
3d298fc549
(test) completion
2024-04-05 21:03:04 -07:00
Ishaan Jaff
7fc416b636
(ci/cd) run again
2024-04-05 17:26:02 -07:00
Ishaan Jaff
fdadeabe79
fix testing yaml
2024-04-05 16:17:53 -07:00
Ishaan Jaff
cfe358abaa
simplify calling azure/command-r-plus
2024-04-05 09:18:11 -07:00
Ishaan Jaff
5d196ff300
test - azure/command-r-plus
2024-04-05 08:56:05 -07:00
Krrish Dholakia
dfcb6bcbc5
test(test_completion.py): skip sagemaker test - aws account suspended
2024-04-04 09:52:24 -07:00
Ishaan Jaff
fa44f45429
(ci/cd) run again
2024-04-03 21:02:08 -07:00
Ishaan Jaff
d627c90bfd
ci/cd run again
2024-04-03 20:13:46 -07:00
Ishaan Jaff
ddb35facc0
ci/cd run again
2024-04-01 07:40:05 -07:00
Krrish Dholakia
49642a5b00
fix(factory.py): parse list in xml tool calling response (anthropic)
improves tool calling outparsing to check if list in response. Also returns the raw response back to the user via `response._hidden_params["original_response"]`, so user can see exactly what anthropic returned
2024-03-29 11:51:26 -07:00
Krrish Dholakia
109cd93a39
fix(sagemaker.py): support model_id consistently. support dynamic args for async calls
2024-03-29 09:05:00 -07:00
Krrish Dholakia
d547944556
fix(sagemaker.py): support 'model_id' param for sagemaker
allow passing inference component param to sagemaker in the same format as we handle this for bedrock
2024-03-29 08:43:17 -07:00
Krrish Dholakia
9ef7afd2b4
test(test_completion.py): skip unresponsive endpoint
2024-03-27 20:12:22 -07:00
Ishaan Jaff
787c9b7df0
(test) claude-1 api is unstable
2024-03-26 08:07:16 -07:00
Krrish Dholakia
2a9fd4c28d
test(test_completion.py): make default claude 3 test message multi-turn
2024-03-23 14:34:42 -07:00
Krrish Dholakia
9b951b906d
test(test_completion.py): fix claude multi-turn conversation test
2024-03-23 00:56:41 -07:00
Ishaan Jaff
52a5ed410b
(ci/cd) run again
2024-03-18 21:24:24 -07:00
Krish Dholakia
0368a335e6
Merge branch 'main' into support_anthropic_function_result
2024-03-16 09:58:08 -07:00
Zihao Li
91f467f55d
Add tool result submission to claude 3 function call test and claude 3 multi-turn conversion to ensure alternating message roles
2024-03-16 01:40:36 +08:00
Krish Dholakia
32ca306123
Merge pull request #2535 from BerriAI/litellm_fireworks_ai_support
feat(utils.py): add native fireworks ai support
2024-03-15 10:02:53 -07:00
Krrish Dholakia
9909f44015
feat(utils.py): add native fireworks ai support
addresses - https://github.com/BerriAI/litellm/issues/777 , https://github.com/BerriAI/litellm/issues/2486
2024-03-15 09:09:59 -07:00
ishaan-jaff
7f0cebe756
(ci/cd) check triggers
2024-03-15 08:21:16 -07:00
ishaan-jaff
fd33eda29d
(ci/cd) check linked triggers
2024-03-15 08:17:55 -07:00
ishaan-jaff
82e44e4962
(ci/cd) check actions run
2024-03-14 20:58:22 -07:00
ishaan-jaff
e7240bb5c1
(ci/cd) fix litellm triggers on commits
2024-03-14 20:50:02 -07:00
ishaan-jaff
e3cc0da5f1
(ci/cd) run testing again
2024-03-13 21:47:56 -07:00
Krish Dholakia
0d18f3c0ca
Merge pull request #2473 from BerriAI/litellm_fix_compatible_provider_model_name
fix(openai.py): return model name with custom llm provider for openai-compatible endpoints (e.g. mistral, together ai, etc.)
2024-03-12 12:58:29 -07:00
Ishaan Jaff
5172fb1de9
Merge pull request #2474 from BerriAI/litellm_support_command_r
[New-Model] Cohere/command-r
2024-03-12 11:11:56 -07:00
ishaan-jaff
aa8b5e9768
(feat) add cohere_chat to model_prices
2024-03-12 10:51:33 -07:00
ishaan-jaff
e5bb65669d
(feat) exception mapping for cohere_chat
2024-03-12 10:45:42 -07:00
ishaan-jaff
f50539ace9
(test) command_r
2024-03-12 10:30:33 -07:00
Krrish Dholakia
0033613b9e
fix(openai.py): return model name with custom llm provider for openai compatible endpoints
2024-03-12 10:30:10 -07:00
ishaan-jaff
223ac464d7
(fix) support streaming for azure/instruct models
2024-03-12 09:50:43 -07:00
ishaan-jaff
b193b01f40
(feat) support azure/gpt-instruct models
2024-03-12 09:30:15 -07:00
ishaan-jaff
c51d25b063
(ci/cd) test
2024-03-09 18:45:27 -08:00
ishaan-jaff
ce19c2aeef
(fix) config.yml
2024-03-09 18:23:57 -08:00
ishaan-jaff
bd340562b8
(fix) use python 3.8 for testing
2024-03-09 16:56:46 -08:00
ishaan-jaff
f70feb1806
(test) name with claude-3
2024-03-08 09:33:54 -08:00
ishaan-jaff
2e130fb662
(test) ci/cd run again
2024-03-06 20:40:27 -08:00