Ishaan Jaff
6fd6490d63
fix - hide _auto_infer_region behind a feature flag
2024-05-10 12:38:06 -07:00
Ishaan Jaff
9d3f01c6ae
fix - router add model logic
2024-05-10 12:32:16 -07:00
Krrish Dholakia
781d5888c3
docs(predibase.md): add support for predibase to docs
2024-05-10 10:58:35 -07:00
Krish Dholakia
8a35354dd6
Merge pull request #3378 from duckboy81/patch-1
...
Expand access for other jwt algorithms
2024-05-10 10:07:36 -07:00
Krrish Dholakia
cdec7a414f
test(test_router_fallbacks.py): fix test
2024-05-10 09:58:40 -07:00
Krrish Dholakia
40e19a838c
bump: version 1.37.1 → 1.37.2
2024-05-10 08:40:31 -07:00
Krrish Dholakia
f9a0364bff
bump: version 1.37.0 → 1.37.1
2024-05-10 08:34:01 -07:00
Krrish Dholakia
9a31f3d3d9
fix(main.py): support env vars 'VERTEX_PROJECT' and 'VERTEX_LOCATION'
2024-05-10 07:57:56 -07:00
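A minimal sketch of what this enables, assuming the standard `litellm.completion` entrypoint (project/location values illustrative):

```python
import os

import litellm

# With this change, the Vertex project/location can come from the environment
# instead of being passed on every call (values illustrative).
os.environ["VERTEX_PROJECT"] = "my-gcp-project"
os.environ["VERTEX_LOCATION"] = "us-central1"

response = litellm.completion(
    model="vertex_ai/gemini-pro",
    messages=[{"role": "user", "content": "Hello from Vertex AI"}],
)
print(response.choices[0].message.content)
```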
Rajan Paneru
65b07bcb8c
Preserving the Pydantic Message Object
...
The following statement replaces the Pydantic Message object, initializing it with the raw dict:
model_response["choices"][0]["message"] = response_json["message"]
We need to make sure message is always a litellm.Message object.
As a fix, based on the code of the ollama.py file, I am updating just the content instead of the entire object, for both the sync and async functions.
2024-05-10 22:12:32 +09:30
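A minimal sketch of the pattern described in the commit above, with an illustrative `response_json` and a default-constructed `litellm.ModelResponse`:

```python
import litellm

# Illustrative response payload; in ollama.py this comes from the API.
response_json = {"message": {"role": "assistant", "content": "hi there"}}
model_response = litellm.ModelResponse()

# Before: the Pydantic Message object was clobbered with a plain dict:
#   model_response["choices"][0]["message"] = response_json["message"]
# After: only the content is updated, so the Message object is preserved.
model_response["choices"][0]["message"].content = response_json["message"]["content"]

print(type(model_response.choices[0].message))  # still a Pydantic Message
```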
Rajan Paneru
8eb842dcf5
reverted the patch so that the fix can be applied in the main place
2024-05-10 22:04:44 +09:30
Simon Sanchez Viloria
e1372de9ee
Merge branch 'main' into feature/watsonx-integration
2024-05-10 12:09:09 +02:00
Simon Sanchez Viloria
d3d82827ed
(test) Add tests for WatsonX completion/acompletion streaming
2024-05-10 11:55:58 +02:00
Simon Sanchez Viloria
170fd11c82
(fix) watsonx.py: fixed linting errors and made sure stream chunks always return usage
2024-05-10 11:53:33 +02:00
Krish Dholakia
a671046b45
Merge pull request #3552 from BerriAI/litellm_predibase_support
...
feat(predibase.py): add support for predibase provider
2024-05-09 22:21:16 -07:00
Krrish Dholakia
714370956f
fix(predibase.py): fix async streaming
2024-05-09 22:18:16 -07:00
Krrish Dholakia
76d4290591
fix(predibase.py): fix event loop closed error
2024-05-09 19:07:19 -07:00
Krrish Dholakia
491e177348
fix(predibase.py): fix async completion call
2024-05-09 18:44:19 -07:00
Krrish Dholakia
5a38438c3f
docs(customer_routing.md): add region-based routing for specific customers, to docs
2024-05-09 18:40:49 -07:00
Krrish Dholakia
425efc60f4
fix(main.py): fix linting error
2024-05-09 18:12:28 -07:00
Ishaan Jaff
5eb12e30cc
Merge pull request #3547 from BerriAI/litellm_support_stream_options_text_completion
...
[Feat] support `stream_options` on `litellm.text_completion`
2024-05-09 18:05:58 -07:00
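A hedged usage sketch of the new flag (model name illustrative); `stream_options` mirrors OpenAI's parameter of the same name:

```python
import litellm

# With include_usage set, a final chunk carrying token usage is emitted
# at the end of the stream.
response = litellm.text_completion(
    model="gpt-3.5-turbo-instruct",
    prompt="Say hello",
    stream=True,
    stream_options={"include_usage": True},
)
for chunk in response:
    print(chunk)
```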
Ishaan Jaff
63bfc12a63
Merge pull request #3555 from CyanideByte/global-ignore-warning
...
Globally filtering pydantic conflict warnings
2024-05-09 18:04:08 -07:00
Krrish Dholakia
9083d8e490
fix: fix linting errors
2024-05-09 17:55:27 -07:00
CyanideByte
4a7be9163b
Globally filtering pydantic conflict warnings
2024-05-09 17:42:19 -07:00
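The technique here is Python's global warnings filter; a sketch along these lines (the exact message pattern is an assumption):

```python
import warnings

# Silence pydantic's "conflict with protected namespace" warnings globally,
# so fields named model_* don't emit a warning on every import.
warnings.filterwarnings("ignore", message=".*conflict with protected namespace.*")
```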
Krrish Dholakia
d7189c21fd
feat(predibase.py): support async_completion + streaming (sync + async)
...
finishes up pr
2024-05-09 17:41:27 -07:00
Krish Dholakia
dab176b7e7
Merge pull request #3551 from powerhouseofthecell/fix/error-on-get-user-role
...
Fix/error on get user role
2024-05-09 17:40:18 -07:00
Rajan Paneru
c45085b728
Use .json based on the data type
...
The value of response_obj["choices"][0]["message"] can be either a Message object or a dict.
Added a conditional to call .json only if it is a Message object.
2024-05-10 10:06:50 +09:30
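One way to express that conditional, duck-typed here for illustration (the actual check in the commit may differ):

```python
import litellm

response_obj = litellm.ModelResponse()  # message may be a Message or a dict
message = response_obj["choices"][0]["message"]

# Call .json() only when it is still a Pydantic Message; pass dicts through.
message_payload = message.json() if hasattr(message, "json") else message
```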
Krrish Dholakia
186c0ec77b
feat(predibase.py): add support for predibase provider
...
Closes https://github.com/BerriAI/litellm/issues/1253
2024-05-09 16:39:43 -07:00
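A hedged usage sketch of the new provider route; the env var names and model id follow the accompanying docs commit and are assumptions here:

```python
import os

import litellm

# Credentials illustrative; Predibase uses an API key plus a tenant id.
os.environ["PREDIBASE_API_KEY"] = "pb-..."
os.environ["PREDIBASE_TENANT_ID"] = "my-tenant-id"

response = litellm.completion(
    model="predibase/llama-3-8b-instruct",
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
)
print(response.choices[0].message.content)
```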
Nick Wong
d3a228d03b
added changes from upstream
...
Merge branch 'main' into fix/error-on-get-user-role
2024-05-09 16:14:14 -07:00
Nick Wong
c42f1ce2c6
removed extra default dict return, which causes an error if user_role is a string
2024-05-09 16:13:26 -07:00
dependabot[bot]
9bcd93178f
build(deps): bump next from 14.1.0 to 14.1.1 in /ui/litellm-dashboard
...
Bumps [next](https://github.com/vercel/next.js) from 14.1.0 to 14.1.1.
- [Release notes](https://github.com/vercel/next.js/releases)
- [Changelog](https://github.com/vercel/next.js/blob/canary/release.js)
- [Commits](https://github.com/vercel/next.js/compare/v14.1.0...v14.1.1)
---
updated-dependencies:
- dependency-name: next
  dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
2024-05-09 22:42:27 +00:00
Krrish Dholakia
43b2050cc2
bump: version 1.36.4 → 1.37.0
2024-05-09 15:41:40 -07:00
Krrish Dholakia
5c6a382d3b
refactor(main.py): trigger new build
2024-05-09 15:41:33 -07:00
Krrish Dholakia
acb615957d
fix(utils.py): change error log to be debug
2024-05-09 13:58:45 -07:00
Krrish Dholakia
c4295e1667
test(test_least_busy_routing.py): avoid deployments with low rate limits
2024-05-09 13:54:24 -07:00
Krrish Dholakia
927d36148f
feat(proxy_server.py): expose new /team/list endpoint
...
Closes https://github.com/BerriAI/litellm/issues/3523
2024-05-09 13:21:00 -07:00
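A hypothetical call against a locally running proxy (base URL and key illustrative):

```python
import requests

# List teams via the new endpoint; requires an admin-scoped proxy key.
resp = requests.get(
    "http://0.0.0.0:4000/team/list",
    headers={"Authorization": "Bearer sk-1234"},
)
print(resp.json())
```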
Krrish Dholakia
e3f25a4a1f
fix(auth_checks.py): fix 'get_end_user_object'
...
await cache get
2024-05-09 13:05:56 -07:00
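A hypothetical sketch of the bug class this fixes; the function and cache method names are illustrative, not the repo's exact code:

```python
# An async cache lookup missing its await returns a coroutine instead of
# the cached end-user object.
async def get_end_user_object(end_user_id: str, cache):
    # before: end_user = cache.async_get_cache(key=end_user_id)  # coroutine!
    end_user = await cache.async_get_cache(key=end_user_id)  # fixed
    return end_user
```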
Ishaan Jaff
bf99311f5c
fix load test length
2024-05-09 12:50:24 -07:00
Ishaan Jaff
c8662234c7
fix docker run command on release notes
2024-05-09 12:38:38 -07:00
Ishaan Jaff
50b4167a27
fix interpret load test
2024-05-09 12:30:35 -07:00
Ishaan Jaff
30b7e9f776
temp fix load test
2024-05-09 12:28:14 -07:00
Ishaan Jaff
84b2af8e6c
fix show docker run on repo
2024-05-09 12:27:44 -07:00
Ishaan Jaff
6634ea37e9
fix TextCompletionStreamWrapper
2024-05-09 09:54:44 -07:00
Ishaan Jaff
e0b1eff1eb
feat - support stream_options for text completion
2024-05-09 08:42:25 -07:00
Ishaan Jaff
a29fcc057b
test - stream_options on OpenAI text_completion
2024-05-09 08:41:31 -07:00
Ishaan Jaff
66053f14ae
stream_options for text-completion OpenAI
2024-05-09 08:37:40 -07:00
Ishaan Jaff
4d5b4a5293
add stream_options to text_completion
2024-05-09 08:35:35 -07:00
Ishaan Jaff
0b1885ca99
Merge pull request #3537 from BerriAI/litellm_support_stream_options_param
...
[Feat] support `stream_options` param for OpenAI
2024-05-09 08:34:08 -07:00
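The chat-completion variant, as a hedged sketch (model name illustrative):

```python
import litellm

# With include_usage set, the final streamed chunk carries token usage.
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello"}],
    stream=True,
    stream_options={"include_usage": True},
)
for chunk in response:
    print(chunk)
```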
Krrish Dholakia
4cfd988529
fix(get_api_base): fix get_api_base to handle model with alias
2024-05-09 08:01:17 -07:00
Ishaan Jaff
dfd6361310
fix completion vs acompletion params
2024-05-09 07:59:37 -07:00
Krish Dholakia
4c8787f896
Merge pull request #3546 from BerriAI/revert-3479-feature/watsonx-integration
...
Revert "Add support for async streaming to watsonx provider "
2024-05-09 07:44:32 -07:00