Krrish Dholakia
4c88e917bd
fix(huggingface_restapi.py): fix task extraction from model name
2024-05-15 07:28:19 -07:00
Krish Dholakia
c3b3eac783
Merge pull request #3554 from paneru-rajan/Issue-3544-fix-message
...
Fixes #3544 based on the data-type of message
2024-05-13 16:23:58 -07:00
Krrish Dholakia
96336cdd49
fix(openai.py): create MistralConfig with response_format mapping for mistral api
2024-05-13 13:29:58 -07:00
Krrish Dholakia
3a356a6f56
fix(vertex_ai.py): support tool call list responses in async completion
2024-05-13 10:42:31 -07:00
Krish Dholakia
784ae85ba0
Merge branch 'main' into litellm_bedrock_command_r_support
2024-05-11 21:24:42 -07:00
Krrish Dholakia
9b10ba649f
fix(anthropic.py): fix tool calling + streaming issue
2024-05-11 20:15:36 -07:00
Krrish Dholakia
cca89c2061
fix(anthropic_text.py): fix linting error
2024-05-11 20:01:50 -07:00
Krrish Dholakia
8592a73881
fix(bedrock_httpx.py): compatibility fix
2024-05-11 19:55:38 -07:00
Krrish Dholakia
2b1c22f088
fix(anthropic.py): compatibility fix
2024-05-11 19:51:29 -07:00
Krrish Dholakia
cfab989abf
fix(anthropic.py): fix version compatibility
2024-05-11 19:46:26 -07:00
Krrish Dholakia
6018c8ab77
fix(anthropic.py): fix linting error
2024-05-11 19:42:14 -07:00
Krrish Dholakia
68596ced04
feat(bedrock_httpx.py): working bedrock command-r sync+async streaming
2024-05-11 19:39:51 -07:00
Krrish Dholakia
bd0c3a81cb
fix(bedrock_httpx.py): working async bedrock command r calls
2024-05-11 16:45:20 -07:00
Krrish Dholakia
5185580e3d
feat(bedrock_httpx.py): working cohere command r async calls
2024-05-11 15:04:38 -07:00
Krrish Dholakia
926b86af87
feat(bedrock_httpx.py): moves to using httpx client for bedrock cohere calls
2024-05-11 13:43:08 -07:00
Krish Dholakia
7f64c61275
Merge pull request #3582 from BerriAI/litellm_explicit_region_name_setting
...
feat(router.py): allow setting model_region in litellm_params
2024-05-11 11:36:22 -07:00
Krrish Dholakia
691c185ff8
feat(router.py): support region routing for bedrock, vertex ai, watsonx
2024-05-11 11:04:00 -07:00
Krish Dholakia
4f89f0d3a4
Merge pull request #3561 from simonsanvil/feature/watsonx-integration
...
(fix) Fixed linting and other bugs with watsonx provider
2024-05-11 09:56:02 -07:00
Krish Dholakia
8ab9c861c9
Merge pull request #3369 from mogith-pn/main
...
Clarifai-LiteLLM: Added Clarifai as an LLM Provider.
2024-05-11 09:31:46 -07:00
Krish Dholakia
997ef2e480
Merge pull request #3507 from Manouchehri/oidc-3505-part-1
...
Initial OIDC support (Google/GitHub/CircleCI -> Amazon Bedrock & Azure OpenAI)
2024-05-11 09:25:17 -07:00
Krrish Dholakia
e12fa09489
fix: fix linting
2024-05-10 20:23:33 -07:00
Krrish Dholakia
bed16d5495
fix(vertex_ai.py): fix list tool call responses
...
Closes https://github.com/BerriAI/litellm/issues/3147
2024-05-10 20:05:58 -07:00
Ishaan Jaff
b02f633cd6
Merge pull request #3577 from BerriAI/litellm_add_triton_server
...
[Feat] Add Triton Embeddings to LiteLLM
2024-05-10 19:20:23 -07:00
Ishaan Jaff
5eca68d504
feat - triton embeddings
2024-05-10 18:57:06 -07:00
Krrish Dholakia
1b5a9a5c80
docs(huggingface.md): add text-classification to huggingface docs
2024-05-10 14:39:14 -07:00
Krrish Dholakia
4680f4e1db
test(test_completion.py): reintegrate testing for huggingface tgi + non-tgi
2024-05-10 14:07:01 -07:00
Rajan Paneru
d1cf981663
Preserving the Pydantic Message Object
...
The following statement replaces the Pydantic Message object, initializing it with a dict:
model_response["choices"][0]["message"] = response_json["message"]
We need to make sure message is always a litellm.Message object.
As a fix, based on the code in the ollama.py file, I am updating just the content instead of the entire object, for both the sync and async functions.
2024-05-10 22:12:32 +09:30
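The commit body above describes the bug pattern: assigning the raw response dict over the Pydantic Message object loses the object type. A minimal sketch of the fix, using a hypothetical stand-in Message class rather than litellm's real one:

```python
# Stand-in sketch of the "preserve the Message object" fix described above.
# Message here is a hypothetical stand-in, not litellm.Message.
from dataclasses import dataclass


@dataclass
class Message:
    content: str = ""
    role: str = "assistant"


def apply_response(message_obj: Message, response_json: dict) -> Message:
    # Buggy pattern (replaces the Message object with a plain dict):
    #   message_obj = response_json["message"]
    # Fix: mutate only the content, so the object type is preserved.
    message_obj.content = response_json["message"]["content"]
    return message_obj


msg = Message()
out = apply_response(msg, {"message": {"content": "hello"}})
assert isinstance(out, Message)  # still a Message, not a dict
```

The same mutation is applied in both the sync and async code paths, per the commit.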
Simon Sanchez Viloria
4e267fdaef
Merge branch 'main' into feature/watsonx-integration
2024-05-10 12:09:09 +02:00
Simon Sanchez Viloria
8e61b707c3
(fix) watsonx.py: Fixed linting errors and made sure stream chunks always return usage
2024-05-10 11:53:33 +02:00
Krish Dholakia
ddf09a3193
Merge pull request #3552 from BerriAI/litellm_predibase_support
...
feat(predibase.py): add support for predibase provider
2024-05-09 22:21:16 -07:00
Krrish Dholakia
910c2cd36a
fix(predibase.py): fix async streaming
2024-05-09 22:18:16 -07:00
Krrish Dholakia
30136493b4
fix(predibase.py): fix event loop closed error
2024-05-09 19:07:19 -07:00
Krrish Dholakia
61fc042f9b
fix(predibase.py): fix async completion call
2024-05-09 18:44:19 -07:00
Ishaan Jaff
a9aa71de01
Merge pull request #3547 from BerriAI/litellm_support_stream_options_text_completion
...
[Feat] support `stream_options` on `litellm.text_completion`
2024-05-09 18:05:58 -07:00
Krrish Dholakia
ef72f25ab6
fix: fix linting errors
2024-05-09 17:55:27 -07:00
Krrish Dholakia
7c0ab40fd5
feat(predibase.py): support async_completion + streaming (sync + async)
...
finishes up pr
2024-05-09 17:41:27 -07:00
Krrish Dholakia
f660d21743
feat(predibase.py): add support for predibase provider
...
Closes https://github.com/BerriAI/litellm/issues/1253
2024-05-09 16:39:43 -07:00
Ishaan Jaff
953805aac5
stream_options for text-completion openai
2024-05-09 08:37:40 -07:00
Ishaan Jaff
2968737969
Merge pull request #3537 from BerriAI/litellm_support_stream_options_param
...
[Feat] support `stream_options` param for OpenAI
2024-05-09 08:34:08 -07:00
Krish Dholakia
8af4596dad
Revert "Add support for async streaming to watsonx provider "
2024-05-09 07:44:15 -07:00
Krish Dholakia
64ca2fde53
Merge branch 'main' into litellm_region_based_routing
2024-05-08 22:19:51 -07:00
Krish Dholakia
ffe255ea2b
Merge pull request #3479 from simonsanvil/feature/watsonx-integration
...
Add support for async streaming to watsonx provider
2024-05-08 22:19:05 -07:00
Krrish Dholakia
0ea8222508
feat(router.py): enable filtering model group by 'allowed_model_region'
2024-05-08 22:10:17 -07:00
Ishaan Jaff
1b9da4a533
support stream_options for chat completion models
2024-05-08 21:52:25 -07:00
Ishaan Jaff
eeece6e90b
Merge branch 'main' into ui_show_spend_end_user
2024-05-08 18:29:25 -07:00
Ishaan Jaff
9881dbb3a1
update global/spend/end_users
2024-05-08 17:03:09 -07:00
Krrish Dholakia
5f93cae3ff
feat(proxy_server.py): return litellm version in response headers
2024-05-08 16:00:08 -07:00
Krish Dholakia
1dccaba550
Revert "* feat(factory.py): add support for merging consecutive messages of one role when separated with empty message of another role"
2024-05-07 21:42:18 -07:00
Krish Dholakia
d8c1987576
Merge pull request #3478 from nkvch/Issue-#3474-anthropic-roles-alternation-issue
...
* feat(factory.py): add support for merging consecutive messages of one role when separated with empty message of another role
2024-05-07 21:24:47 -07:00
Ishaan Jaff
d399947111
Merge pull request #3470 from mbektas/fix-ollama-embeddings
...
support sync ollama embeddings
2024-05-07 19:21:37 -07:00