Krrish Dholakia | 4efaacc5ce | 2024-05-03 17:31:34 -07:00
    fix(anthropic.py): handle whitespace characters for anthropic calls

Krrish Dholakia | 001e7976da | 2024-05-03 16:51:26 -07:00
    docs(exception_mapping.md): update exception mapping docs with 'should_retry'
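The 'should_retry' docs commit above concerns classifying errors as retryable. A minimal, hypothetical sketch of status-code-based retry classification (illustrative only — litellm's actual exception mapping is provider-aware and richer than this):

```python
# Hypothetical sketch: decide retries from HTTP status codes alone.
# Rate limits, timeouts, and transient server errors are retryable;
# auth and validation errors are not.
RETRYABLE_STATUS_CODES = {408, 429, 500, 502, 503, 504}

def should_retry(status_code: int) -> bool:
    # Retry only when waiting and resending could plausibly succeed
    return status_code in RETRYABLE_STATUS_CODES
```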
Ishaan Jaff | 4819119823 | 2024-05-03 16:28:12 -07:00
    Merge pull request #3432 from BerriAI/litellm_add_bedrock_titan_embed
    Feat - add bedrock titan embed-v2

Krrish Dholakia | 7715a9d333 | 2024-05-03 16:24:21 -07:00
    fix(bedrock.py): convert httpx.timeout to boto3 valid timeout
    Closes https://github.com/BerriAI/litellm/issues/3398

Ishaan Jaff | 8943d7b3e6 | 2024-05-03 16:23:37 -07:00
    test - bedrock v2 supports dimension

Ishaan Jaff | c468870c3d | 2024-05-03 16:13:27 -07:00
    fix - support dimension for titan embed v2
Krrish Dholakia | aca36d8b79 | 2024-05-03 15:27:32 -07:00
    feat(proxy_server.py): return api base in response headers
    Closes https://github.com/BerriAI/litellm/issues/2631

Krrish Dholakia | d8446ffaf4 | 2024-05-03 10:09:57 -07:00
    test(test_amazing_vertex_completion.py): try-except api errors

Krish Dholakia | 7e04447159 | 2024-05-02 16:32:41 -07:00
    Merge pull request #3393 from Priva28/main
    Add Llama3 tokenizer and allow custom tokenizers.

Ishaan Jaff | 1dba007648 | 2024-05-02 10:54:47 -07:00
    Merge pull request #3392 from BerriAI/litellm_fix_langfuse_reinitalized
    [Fix] bug where langfuse was reinitialized on every call

Krrish Dholakia | 6652e32869 | 2024-05-02 08:14:45 -07:00
    fix(utils.py): revert pydantic object changes

Krrish Dholakia | c0487b16af | 2024-05-02 08:14:45 -07:00
    fix(utils.py): add missing providers + models to validate_environment
    Closes https://github.com/BerriAI/litellm/issues/3190

Krrish Dholakia | 4fe55a28fc | 2024-05-01 22:51:46 -07:00
    fix(utils.py): fix pydantic object default values

Christian Privitelli | 8b4bc4c832 | 2024-05-02 15:49:22 +10:00
    include methods in init import, add test, fix encode/decode param ordering

Krrish Dholakia | a241cdb818 | 2024-05-01 22:24:37 -07:00
    fix(utils.py): fix default values

Christian Privitelli | f461f6b746 | 2024-05-02 14:56:38 +10:00
    update docs, allow use with encode/decode

Krish Dholakia | 14531ed6fa | 2024-05-01 21:19:52 -07:00
    Merge pull request #3379 from elisalimli/main
    [chore] Improve type-safety in Message & Delta classes

Krrish Dholakia | 0b1382aef4 | 2024-05-01 21:16:23 -07:00
    build(ui): enable adding openmeter via proxy ui

Christian Privitelli | 2736a2c671 | 2024-05-02 14:11:03 +10:00
    fix docs

Christian Privitelli | ff16701a4b | 2024-05-02 14:05:38 +10:00
    add llama3 tokenizer and support for custom tokenizer

Ishaan Jaff | b818de3822 | 2024-05-01 20:47:25 -07:00
    fix bug where langfuse was reinitialized

Krrish Dholakia | e7b3ac8e06 | 2024-05-01 17:23:48 -07:00
    feat(openmeter.py): add support for user billing
    open-meter supports user based billing. Closes https://github.com/BerriAI/litellm/issues/1268

Krrish Dholakia | b89f5aefb4 | 2024-05-01 13:45:50 -07:00
    fix(utils.py): fix azure streaming content filter error chunk

Ishaan Jaff | ef53b2ecf2 | 2024-05-01 11:44:54 -07:00
    Merge pull request #3384 from BerriAI/litellm_fix_details_to
    fix - error sending details to log on sentry

Ishaan Jaff | b63dac8442 | 2024-05-01 11:43:14 -07:00
    fix - error sending details to log on sentry

Krrish Dholakia | abf14c61b3 | 2024-05-01 11:08:44 -07:00
    fix(utils.py): check if response_object["choices"] is not none and iterable

Krrish Dholakia | b05fe5539e | 2024-05-01 10:47:26 -07:00
    fix(utils.py): return received args for invalid model response object error
    Addresses https://github.com/BerriAI/litellm/issues/3381

alisalim17 | f676f53f6d | 2024-05-01 17:01:45 +04:00
    set default tool calls and function call

alisalim17 | fd069fc4f6 | 2024-05-01 13:47:01 +04:00
    add tool_calls attribute to Message and Delta classes in order to improve type-safety

Krrish Dholakia | c18fb79761 | 2024-04-30 14:17:10 -07:00
    fix(utils.py): add exception mapping for gemini error

Krrish Dholakia | 701be55872 | 2024-04-29 19:21:19 -07:00
    feat(utils.py): json logs for raw request sent by litellm
    make it easier to view verbose logs in datadog

Krish Dholakia | f55d6b686b | 2024-04-29 09:13:41 -07:00
    Merge pull request #3354 from BerriAI/litellm_replicate_cost_tracking
    fix(utils.py): replicate now also has token based pricing for some models

Krrish Dholakia | 2a88543f4d | 2024-04-29 08:36:01 -07:00
    fix(utils.py): default to time-based tracking for unmapped replicate models. fix time-based cost calc for replicate

Krrish Dholakia | 453f3d068d | 2024-04-29 08:28:31 -07:00
    fix(utils.py): use llama tokenizer for replicate models

Krrish Dholakia | 6d3dc05da2 | 2024-04-29 08:09:59 -07:00
    fix(utils.py): fix watson streaming

Krrish Dholakia | 945bc96389 | 2024-04-29 08:06:15 -07:00
    fix(utils.py): replicate now also has token based pricing for some models

Krish Dholakia | 7ed64709aa | 2024-04-28 08:38:06 -07:00
    Merge branch 'main' into litellm_common_auth_params

Krrish Dholakia | 653e7fc584 | 2024-04-27 14:13:34 -07:00
    fix(utils.py): don't return usage for streaming - openai spec

Ishaan Jaff | 48d590c428 | 2024-04-27 11:25:09 -07:00
    Merge pull request #3330 from BerriAI/litellm_rdct_msgs
    [Feat] Redact Logging Messages/Response content on Logging Providers with `litellm.turn_off_message_logging=True`

Ishaan Jaff | c83a9a677f | 2024-04-27 11:23:08 -07:00
    fix - sentry data redaction

Krrish Dholakia | ee68c0aaaa | 2024-04-27 11:06:18 -07:00
    feat(utils.py): unify common auth params across azure/vertex_ai/bedrock/watsonx

Ishaan Jaff | 37917b2bf7 | 2024-04-27 10:51:17 -07:00
    fix use redact_message_input_output_from_logging

Ishaan Jaff | 7191e5f593 | 2024-04-27 10:03:07 -07:00
    feat- turn off message logging
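The redaction commits above introduce `litellm.turn_off_message_logging=True`, which strips message/response content before it reaches a logging provider. A hypothetical sketch of that pattern — function and field names here are illustrative, not litellm's internals:

```python
def redact_for_logging(payload: dict, turn_off_message_logging: bool) -> dict:
    # Redact only the copy sent to the logging provider;
    # the live request/response object stays untouched.
    if not turn_off_message_logging:
        return payload
    redacted = dict(payload)
    for key in ("messages", "response"):
        if key in redacted:
            redacted[key] = "redacted-by-litellm"
    return redacted
```

Shallow-copying before mutating is the important detail: the logger sees placeholders while the caller still receives the real completion.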
Tejas Ravishankar | 44f441ea74 | 2024-04-27 17:47:28 +04:00
    fix: duplicate mention of VERTEXAI_PROJECT environment variable causing confusion

Krish Dholakia | b7beab2e39 | 2024-04-27 05:48:34 -07:00
    Merge pull request #3270 from simonsanvil/feature/watsonx-integration
    (feat) add IBM watsonx.ai as an llm provider

Krrish Dholakia | 85a331024a | 2024-04-25 20:22:18 -07:00
    test(test_streaming.py): fix test

Krrish Dholakia | ec548df1d6 | 2024-04-25 20:01:36 -07:00
    fix(utils.py): handle pydantic v1

Krrish Dholakia | 2c2e4319ab | 2024-04-25 19:57:24 -07:00
    fix(utils.py): add more logging to identify ci/cd issue

Krish Dholakia | f9814105f7 | 2024-04-25 19:49:59 -07:00
    Merge pull request #3310 from BerriAI/litellm_langfuse_error_logging_2
    fix(proxy/utils.py): log rejected proxy requests to langfuse

Krrish Dholakia | 462724feef | 2024-04-25 19:26:27 -07:00
    fix(proxy/utils.py): log rejected proxy requests to langfuse