| Author | Commit | Message | Date |
|--------|--------|---------|------|
| Ishaan Jaff | `a731e00c6e` | Merge pull request #3462 from ffreemt/main: Add return_exceptions to batch_completion (retry) | 2024-05-24 09:19:10 -07:00 |
| ffreemt | `ae6834e97a` | Make return-exceptions as default behavior in litellm.batch_completion | 2024-05-24 11:09:11 +08:00 |
| Krrish Dholakia | `e3c5e004c5` | feat(databricks.py): add embedding model support | 2024-05-23 18:22:03 -07:00 |
| Krrish Dholakia | `143a44823a` | feat(databricks.py): adds databricks support - completion, async, streaming (Closes https://github.com/BerriAI/litellm/issues/2160) | 2024-05-23 16:29:46 -07:00 |
| Krrish Dholakia | `4795c56f84` | feat(anthropic.py): support anthropic 'tool_choice' param (Closes https://github.com/BerriAI/litellm/issues/3752) | 2024-05-21 17:50:44 -07:00 |
| Ishaan Jaff | `76a1444621` | add ImageObject | 2024-05-20 10:45:37 -07:00 |
| Ishaan Jaff | `884e2beed6` | feat - working httpx requests vertex ai image gen | 2024-05-20 09:51:15 -07:00 |
| Krrish Dholakia | `56084d5ac1` | fix(bedrock_httpx.py): support mapping for bedrock cohere command r text | 2024-05-17 16:13:49 -07:00 |
| Krrish Dholakia | `86ece7d8b5` | fix(main.py): fix async stream handling during bedrock error | 2024-05-16 23:37:59 -07:00 |
| Krrish Dholakia | `13e4196e3e` | fix(bedrock_httpx.py): add async support for bedrock amazon, meta, mistral models | 2024-05-16 22:39:25 -07:00 |
| Krrish Dholakia | `8409b39f0d` | fix(bedrock_httpx.py): move bedrock ai21 calls to being async | 2024-05-16 22:21:30 -07:00 |
| Krrish Dholakia | `118fc4ffac` | fix(bedrock_httpx.py): move anthropic bedrock calls to httpx (Fixing https://github.com/BerriAI/litellm/issues/2921) | 2024-05-16 21:51:55 -07:00 |
| Krrish Dholakia | `e41897808d` | fix(replicate.py): move replicate calls to being completely async (Closes https://github.com/BerriAI/litellm/issues/3128) | 2024-05-16 17:24:08 -07:00 |
| Ishaan Jaff | `4be6dd7a73` | Merge pull request #3694 from BerriAI/litellm_allow_setting_anthropic_beta: [Feat] Support Anthropic `tools-2024-05-16` - Set Custom Anthropic Custom Headers | 2024-05-16 15:48:26 -07:00 |
| Ishaan Jaff | `176630bce9` | feat use OpenAI extra_headers param | 2024-05-16 14:38:17 -07:00 |
| Krrish Dholakia | `93dd54be6d` | test: fix test | 2024-05-15 08:51:40 -07:00 |
| Krrish Dholakia | `f194b23d0d` | fix(main.py): testing fix | 2024-05-15 08:23:00 -07:00 |
| Edwin Jose George | `91ee911cb4` | fix: custom_llm_provider needs to be set before setting timeout | 2024-05-15 22:36:15 +09:30 |
| Krrish Dholakia | `844c528fa4` | refactor(main.py): trigger new build | 2024-05-14 22:46:44 -07:00 |
| Krrish Dholakia | `b6a9995af6` | refactor(main.py): trigger new build | 2024-05-14 22:17:40 -07:00 |
| Krrish Dholakia | `a09892f3a4` | refactor(main.py): trigger new build | 2024-05-14 19:52:23 -07:00 |
| Krrish Dholakia | `d8ecda3310` | fix(main.py): ignore model_config param | 2024-05-14 19:03:17 -07:00 |
| Krrish Dholakia | `8033bf343f` | test(test_completion.py): handle async watsonx call fail | 2024-05-13 18:40:51 -07:00 |
| Krrish Dholakia | `87a21115c5` | test: handle watsonx rate limit error | 2024-05-13 18:27:39 -07:00 |
| Krrish Dholakia | `d3e26db104` | refactor(main.py): trigger new build | 2024-05-13 18:12:01 -07:00 |
| Krrish Dholakia | `ca641d0a24` | fix(utils.py): handle api assistant returning 'null' role (Fixes https://github.com/BerriAI/litellm/issues/3621) | 2024-05-13 16:46:07 -07:00 |
| Krrish Dholakia | `8d94665842` | fix(utils.py): fix custom pricing when litellm model != response obj model name | 2024-05-13 15:25:35 -07:00 |
| Krrish Dholakia | `c5ca2619f9` | refactor(main.py): trigger new build | 2024-05-11 22:53:09 -07:00 |
| Krish Dholakia | `784ae85ba0` | Merge branch 'main' into litellm_bedrock_command_r_support | 2024-05-11 21:24:42 -07:00 |
| Krrish Dholakia | `68596ced04` | feat(bedrock_httpx.py): working bedrock command-r sync+async streaming | 2024-05-11 19:39:51 -07:00 |
| Krrish Dholakia | `5185580e3d` | feat(bedrock_httpx.py): working cohere command r async calls | 2024-05-11 15:04:38 -07:00 |
| Krrish Dholakia | `926b86af87` | feat(bedrock_httpx.py): moves to using httpx client for bedrock cohere calls | 2024-05-11 13:43:08 -07:00 |
| Krish Dholakia | `8ab9c861c9` | Merge pull request #3369 from mogith-pn/main: Clarifai-LiteLLM: Added clarifai as LLM Provider. | 2024-05-11 09:31:46 -07:00 |
| Krrish Dholakia | `95f6994d20` | refactor(main.py): trigger new build | 2024-05-10 20:17:39 -07:00 |
| Ishaan Jaff | `5eca68d504` | feat - triton embeddings | 2024-05-10 18:57:06 -07:00 |
| Krrish Dholakia | `62ba6f20f1` | test(test_router_fallbacks.py): fix test | 2024-05-10 09:58:40 -07:00 |
| Krrish Dholakia | `03139e1769` | fix(main.py): support env var 'VERTEX_PROJECT' and 'VERTEX_LOCATION' | 2024-05-10 07:57:56 -07:00 |
| Krish Dholakia | `ddf09a3193` | Merge pull request #3552 from BerriAI/litellm_predibase_support: feat(predibase.py): add support for predibase provider | 2024-05-09 22:21:16 -07:00 |
| Krrish Dholakia | `a053b3b43e` | fix(main.py): fix linting error | 2024-05-09 18:12:28 -07:00 |
| Ishaan Jaff | `a9aa71de01` | Merge pull request #3547 from BerriAI/litellm_support_stream_options_text_completion: [Feat] support `stream_options` on `litellm.text_completion` | 2024-05-09 18:05:58 -07:00 |
| Krrish Dholakia | `7c0ab40fd5` | feat(predibase.py): support async_completion + streaming (sync + async); finishes up pr | 2024-05-09 17:41:27 -07:00 |
| Krrish Dholakia | `f660d21743` | feat(predibase.py): add support for predibase provider (Closes https://github.com/BerriAI/litellm/issues/1253) | 2024-05-09 16:39:43 -07:00 |
| Krrish Dholakia | `c87d33d2ad` | refactor(main.py): trigger new build | 2024-05-09 15:41:33 -07:00 |
| Ishaan Jaff | `752d771507` | add stream_options to text_completion | 2024-05-09 08:35:35 -07:00 |
| Ishaan Jaff | `2968737969` | Merge pull request #3537 from BerriAI/litellm_support_stream_options_param: [Feat] support `stream_options` param for OpenAI | 2024-05-09 08:34:08 -07:00 |
| Ishaan Jaff | `4e4c214984` | fix completion vs acompletion params | 2024-05-09 07:59:37 -07:00 |
| Krish Dholakia | `8af4596dad` | Revert "Add support for async streaming to watsonx provider" | 2024-05-09 07:44:15 -07:00 |
| Krish Dholakia | `64ca2fde53` | Merge branch 'main' into litellm_region_based_routing | 2024-05-08 22:19:51 -07:00 |
| Krish Dholakia | `ffe255ea2b` | Merge pull request #3479 from simonsanvil/feature/watsonx-integration: Add support for async streaming to watsonx provider | 2024-05-08 22:19:05 -07:00 |
| Krrish Dholakia | `0ea8222508` | feat(router.py): enable filtering model group by 'allowed_model_region' | 2024-05-08 22:10:17 -07:00 |
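The top entries (`a731e00c6e`, `ae6834e97a`) make return-exceptions the default behavior for `litellm.batch_completion`, so a single failed request no longer aborts the whole batch. As a rough illustration of that semantics (not litellm's actual implementation), here is a stdlib-only sketch using `asyncio.gather(return_exceptions=True)`, with `fake_llm_call` standing in for a real per-prompt completion call:

```python
import asyncio


async def fake_llm_call(prompt: str) -> str:
    # Stand-in for a per-prompt completion request; fails on one input.
    if prompt == "bad":
        raise ValueError("upstream error")
    return f"echo: {prompt}"


async def batch(prompts):
    # With return_exceptions=True, one failed call does not abort the
    # batch: the exception object is returned in that prompt's slot,
    # and successful results keep their positions.
    return await asyncio.gather(
        *(fake_llm_call(p) for p in prompts), return_exceptions=True
    )


results = asyncio.run(batch(["hi", "bad", "bye"]))
# results[0] and results[2] are strings; results[1] is a ValueError
```

The caller then checks each slot with `isinstance(r, Exception)` instead of wrapping the whole batch in a try/except.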