Author | Commit | Message | Date
Krrish Dholakia | 09b7235b31 | fix: support info level logging on pkg + proxy | 2024-01-20 17:45:47 -08:00
Krrish Dholakia | b07677c6be | fix(gemini.py): support streaming | 2024-01-19 20:21:34 -08:00
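The streaming fix above surfaces through litellm's standard streaming interface. A minimal sketch of consuming it, assuming the `gemini/gemini-pro` model string and a configured API key (both illustrative, not taken from the commit):

```python
# Sketch: streaming a Gemini completion through litellm.
import litellm

response = litellm.completion(
    model="gemini/gemini-pro",
    messages=[{"role": "user", "content": "Write a haiku about logging."}],
    stream=True,  # the path fixed in gemini.py above
)
for chunk in response:
    # chunks follow the OpenAI-style streaming delta format
    print(chunk.choices[0].delta.content or "", end="")
```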
Krrish Dholakia | f2a8ceddc2 | fix(utils.py): revert exception mapping change | 2024-01-19 17:39:35 -08:00
Krrish Dholakia | f05aba1f85 | fix(utils.py): add metadata to logging obj on setup, if exists | 2024-01-19 17:29:47 -08:00
ishaan-jaff | 6a695477ba | (fix) async langfuse logger | 2024-01-19 10:44:51 -08:00
ishaan-jaff | f2cfb76920 | (fix) use asyncio run_in_executor | 2024-01-19 09:52:51 -08:00
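`run_in_executor` here refers to the standard asyncio pattern for keeping a blocking call, such as a synchronous logging client, off the event loop. A minimal sketch of the pattern; `log_event_blocking` is a hypothetical stand-in, not a litellm function:

```python
# Sketch of the asyncio run_in_executor pattern: run a blocking call
# on a worker thread so the event loop stays responsive.
import asyncio

def log_event_blocking(payload: dict) -> None:
    ...  # synchronous network I/O, e.g. posting to a logging backend

async def log_event(payload: dict) -> None:
    loop = asyncio.get_running_loop()
    # None = use the loop's default ThreadPoolExecutor
    await loop.run_in_executor(None, log_event_blocking, payload)
```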
ishaan-jaff | a9c5b02303 | (v0) fix | 2024-01-19 08:51:14 -08:00
ishaan-jaff | 697c511e76 | (feat) support user param for all providers | 2024-01-18 17:45:59 -08:00
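The `user` parameter mirrors OpenAI's end-user identifier; with the change above it is forwarded whichever provider backs the call. A sketch of the call shape, with the model strings purely illustrative:

```python
# Sketch: the OpenAI-style `user` field, forwarded to every provider.
import litellm

for model in ["gpt-3.5-turbo", "claude-2", "gemini/gemini-pro"]:
    litellm.completion(
        model=model,
        messages=[{"role": "user", "content": "hi"}],
        user="end-user-123",  # passed through to the underlying provider
    )
```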
ishaan-jaff | debef7544d | (feat) return Azure enhancements used | 2024-01-17 18:46:41 -08:00
Krrish Dholakia | 08b409bae8 | fix(utils.py): fix if check | 2024-01-17 17:17:58 -08:00
Krrish Dholakia | 7ed4d9b4d1 | fix(utils.py): allow dynamically setting boto3 init and switching between bedrock and openai | 2024-01-17 15:56:30 -08:00
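The commit above lets callers supply AWS credentials per request and swap between Bedrock and OpenAI models without process-level configuration. A sketch of what that enables; the `aws_*` keyword names are assumptions based on litellm's Bedrock documentation:

```python
# Sketch: per-call boto3 credentials, same call path for bedrock and openai.
import litellm

def ask(model: str, prompt: str, **provider_kwargs):
    return litellm.completion(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        **provider_kwargs,
    )

# Bedrock with dynamically supplied credentials (placeholders)
ask(
    "bedrock/anthropic.claude-v2",
    "hello",
    aws_access_key_id="...",
    aws_secret_access_key="...",
    aws_region_name="us-west-2",
)
# Same helper, OpenAI model, no AWS setup involved
ask("gpt-3.5-turbo", "hello")
```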
Krrish Dholakia | 8e9dc09955 | fix(bedrock.py): add support for sts based boto3 initialization (https://github.com/BerriAI/litellm/issues/1476) | 2024-01-17 12:08:59 -08:00
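STS-based initialization means assuming an IAM role and building the Bedrock client from the temporary credentials. A minimal boto3 sketch of that flow; the role ARN and session name are placeholders:

```python
# Sketch of STS-based boto3 initialization for Bedrock: assume a role,
# then construct the client from the temporary credentials it returns.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/bedrock-caller",  # placeholder
    RoleSessionName="litellm-session",
)["Credentials"]

bedrock = boto3.client(
    "bedrock-runtime",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```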
Krrish Dholakia | 7b39aacadf | fix(utils.py): mistral optional param mapping | 2024-01-17 09:44:21 -08:00
ishaan-jaff | 00ac18e8b7 | (feat) improve bedrock, sagemaker exception mapping | 2024-01-15 21:22:22 -08:00
ishaan-jaff | fcc1e23a05 | (fix) post_call rules | 2024-01-15 20:56:25 -08:00
ishaan-jaff | e864c78d15 | (feat) post call rules - fail with error message | 2024-01-15 17:13:13 -08:00
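Post-call rules let a callback inspect the model output after the request returns and fail the call with a custom error message. An illustrative sketch of the registration pattern; the exact rule signature and return shape are assumptions, not taken from these commits:

```python
# Sketch: a litellm post-call rule that rejects empty answers with a
# custom message. Rule signature/return shape are assumptions.
import litellm

def no_empty_answers(output: str):
    if not output.strip():
        return {"decision": False, "message": "Model returned an empty answer"}
    return {"decision": True}

litellm.post_call_rules = [no_empty_answers]
```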
ishaan-jaff | 79ad63009e | (feat) support extra body for Azure, OpenAI | 2024-01-13 14:32:11 -08:00
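`extra_body` carries provider-specific fields that aren't first-class litellm parameters into the request payload. A sketch; the deployment name is a placeholder and the inner field is illustrative (here, Azure's vision-enhancement shape):

```python
# Sketch: forwarding provider-specific request-body fields via extra_body.
import litellm

litellm.completion(
    model="azure/my-deployment",  # placeholder deployment name
    messages=[{"role": "user", "content": "hi"}],
    extra_body={"enhancements": {"ocr": {"enabled": True}}},
)
```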
ishaan-jaff | 6bae534968 | (fix) check if custom_llm_provider is not None | 2024-01-13 12:54:03 -08:00
ishaan-jaff | 53fd62b0cd | (feat) use custom_llm_provider in completion_cost | 2024-01-13 12:29:51 -08:00
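Honoring `custom_llm_provider` in `completion_cost` means cost lookup can key off the provider recorded on the response instead of guessing it from the model name. A sketch of the typical call:

```python
# Sketch: computing spend from a response object.
import litellm

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
)
cost = litellm.completion_cost(completion_response=response)
print(f"${cost:.6f}")
```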
ishaan-jaff | 6b2a4714a6 | (feat) return custom_llm_provider in streaming response | 2024-01-12 17:14:43 -08:00
David Leen | a674de8f36 | improve bedrock exception granularity | 2024-01-12 16:38:55 +01:00
Ishaan Jaff | d181bd22a7 | Merge pull request #1422 from dleen/httpx: (fix) create httpx.Request instead of httpx.request | 2024-01-11 22:31:55 +05:30
David Leen | 6b87c13b9d | (fix) create httpx.Request instead of httpx.request (fixes #1420) | 2024-01-11 16:22:26 +01:00
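The one-character bug behind #1420 and #1422 is worth spelling out: `httpx.request(...)` sends a request immediately, while `httpx.Request(...)` only constructs a request object for later sending. The URL below is a placeholder:

```python
# The distinction behind the fix: httpx.request(...) SENDS immediately,
# while httpx.Request(...) only CONSTRUCTS a request object.
import httpx

# builds the object; nothing goes over the wire yet
req = httpx.Request("POST", "https://example.com/v1/chat", json={"k": "v"})

# explicit send, useful when mocking or customizing transports
with httpx.Client() as client:
    resp = client.send(req)
```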
ishaan-jaff | 1fb3547e48 | (feat) improve litellm verbose logs | 2024-01-11 18:13:08 +05:30
ishaan-jaff | f297a4d174 | (feat) show args passed to litellm.completion, acompletion on call | 2024-01-11 17:56:27 +05:30
Ishaan Jaff | 2433d6c613 | Merge pull request #1200 from MateoCamara/explicit-args-acomplete: feat: added explicit args to acomplete | 2024-01-11 10:39:05 +05:30
ishaan-jaff | f61d8596e1 | (fix) working s3 logging | 2024-01-11 08:57:32 +05:30
Krrish Dholakia | 3080f27b54 | fix(utils.py): raise correct error for azure content blocked error | 2024-01-10 23:31:51 +05:30
Mateo Cámara | 203089e6c7 | Merge branch 'main' into explicit-args-acomplete | 2024-01-09 13:07:37 +01:00
Ishaan Jaff | 4cfa010dbd | Merge pull request #1381 from BerriAI/litellm_content_policy_violation_exception: [Feat] Add litellm.ContentPolicyViolationError | 2024-01-09 17:18:29 +05:30
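With #1381, provider content-filter rejections surface as a dedicated exception type instead of a generic error. A sketch of catching it; the deployment name is a placeholder:

```python
# Sketch: handling the ContentPolicyViolationError added in #1381.
import litellm

try:
    litellm.completion(
        model="azure/my-deployment",  # placeholder
        messages=[{"role": "user", "content": "..."}],
    )
except litellm.ContentPolicyViolationError as e:
    # raised when the provider's content filter blocks the request
    print(f"blocked by content filter: {e}")
```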
ishaan-jaff | 248e5f3d92 | (chore) remove deprecated completion_with_config() tests | 2024-01-09 17:13:06 +05:30
ishaan-jaff | 186fc4614d | (feat) add ContentPolicyViolationError for azure | 2024-01-09 16:58:09 +05:30
ishaan-jaff | 9da61bdf31 | (fix) ContentPolicyViolationError | 2024-01-09 16:53:15 +05:30
Mateo Cámara | bb06c51ede | Added test to check if acompletion is using the same parameters as CompletionRequest attributes. Added functools to client decorator to expose acompletion parameters from outside. | 2024-01-09 12:06:49 +01:00
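The `functools` mention in the commit above refers to the standard trick for making a decorator transparent: without `functools.wraps`, the wrapped function's name, docstring, and signature metadata are hidden behind the wrapper, so `acompletion`'s parameters can't be introspected from outside. A minimal sketch of the pattern (the decorator body is simplified):

```python
# Sketch: functools.wraps copies the wrapped function's metadata onto
# the wrapper, keeping its parameters visible to introspection.
import functools

def client(fn):
    @functools.wraps(fn)  # without this, help(acompletion) shows `wrapper`
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper
```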
ishaan-jaff | 09874cc83f | (v0) add ContentPolicyViolationError | 2024-01-09 16:33:03 +05:30
ishaan-jaff | 5f2cbfc711 | (feat) litellm.completion - support ollama timeout | 2024-01-09 10:34:41 +05:30
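Ollama timeout support means the standard `timeout` kwarg now applies to Ollama-backed calls as well. A sketch; the model name and local `api_base` are illustrative:

```python
# Sketch: per-request timeout on an Ollama-backed completion.
import litellm

litellm.completion(
    model="ollama/llama2",
    messages=[{"role": "user", "content": "hi"}],
    api_base="http://localhost:11434",
    timeout=10,  # seconds
)
```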
Krrish Dholakia | dd78782133 | fix(utils.py): error handling for litellm --model mistral edge case | 2024-01-08 15:09:01 +05:30
Krrish Dholakia | 6333fbfe56 | fix(main.py): support cost calculation for text completion streaming object | 2024-01-08 12:41:43 +05:30
Krrish Dholakia | 9b46412279 | fix(utils.py): fix logging for text completion streaming | 2024-01-08 12:05:28 +05:30
Krrish Dholakia | c04fa54d19 | fix(utils.py): fix exception raised | 2024-01-08 07:42:17 +05:30
Krrish Dholakia | 3469b5b911 | fix(utils.py): map optional params for gemini | 2024-01-08 07:38:55 +05:30
Krrish Dholakia | 75177c2a15 | bump: version 1.16.16 → 1.16.17 | 2024-01-08 07:16:37 +05:30
Krish Dholakia | 439ee3bafc | Merge pull request #1344 from BerriAI/litellm_speed_improvements: Litellm speed improvements | 2024-01-06 22:38:10 +05:30
Krrish Dholakia | 5fd2f945f3 | fix(factory.py): support gemini-pro-vision on google ai studio (https://github.com/BerriAI/litellm/issues/1329) | 2024-01-06 22:36:22 +05:30
Krrish Dholakia | 712f89b4f1 | fix(utils.py): handle original_response being a json | 2024-01-06 17:02:50 +05:30
ishaan-jaff | 4679c7b99a | (fix) caching use same "created" in response_object | 2024-01-05 16:03:56 +05:30
ishaan-jaff | 00b001b96b | (feat) completion_cost: improve model=None error | 2024-01-05 15:26:04 +05:30
ishaan-jaff | f681f0f2b2 | (feat) completion_cost - embeddings + raise Exception | 2024-01-05 13:11:23 +05:30
Krrish Dholakia | aa72d65c90 | fix(utils.py): fix check for if cached response should be returned | 2024-01-04 21:49:19 +05:30
Krrish Dholakia | 773a0a147a | fix(utils.py): raise a bad request error if litellm client raises a model/provider not found error | 2024-01-04 15:50:43 +05:30