Krrish Dholakia | 2a9651b3ca | feat(openmeter.py): add support for user billing | 2024-05-01 17:23:48 -07:00
    OpenMeter supports user-based billing. Closes https://github.com/BerriAI/litellm/issues/1268
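A minimal sketch of what user-based billing via the OpenMeter callback could look like. The callback registration follows litellm's usual logging-callback pattern and is an assumption here, as are the model name and the end-user id:

```python
import litellm
from litellm import completion

# Assumption: OpenMeter is registered as a logging callback (credentials such as
# OPENMETER_API_KEY are expected in the environment); the exact registration
# style is taken from litellm's general callback pattern, not from this commit.
litellm.success_callback = ["openmeter"]

# Passing the OpenAI-style `user` field lets the callback attribute this
# request's usage to a specific end user for billing.
response = completion(
    model="gpt-3.5-turbo",                          # placeholder model
    messages=[{"role": "user", "content": "hi"}],
    user="customer-1234",                           # hypothetical end-user id
)
print(response.choices[0].message.content)
```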
Krrish Dholakia | 37eb7910d2 | fix(utils.py): fix azure streaming content filter error chunk | 2024-05-01 13:45:50 -07:00
Ishaan Jaff | b3161bb20b | Merge pull request #3384 from BerriAI/litellm_fix_details_to | 2024-05-01 11:44:54 -07:00
    fix - error sending details to log on sentry
Ishaan Jaff | 8e75d07bd0 | fix - error sending details to log on sentry | 2024-05-01 11:43:14 -07:00
Krrish Dholakia | 608fef60a6 | fix(utils.py): check if response_object["choices"] is not None and iterable | 2024-05-01 11:08:44 -07:00
Krrish Dholakia | e96ccb8edf | fix(utils.py): return received args for invalid model response object error | 2024-05-01 10:47:26 -07:00
    Addresses https://github.com/BerriAI/litellm/issues/3381
alisalim17 | 81ad331d92 | set default tool calls and function call | 2024-05-01 17:01:45 +04:00
alisalim17 | 20a796bacb | add tool_calls attribute to Message and Delta classes in order to improve type-safety | 2024-05-01 13:47:01 +04:00
Krrish Dholakia | 3cc82f558e | fix(utils.py): add exception mapping for gemini error | 2024-04-30 14:17:10 -07:00
Krrish Dholakia | b46db8b891 | feat(utils.py): json logs for raw request sent by litellm | 2024-04-29 19:21:19 -07:00
    make it easier to view verbose logs in datadog
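A rough sketch of how the JSON-formatted verbose logs might be enabled. The `json_logs` and `set_verbose` toggles follow litellm's debugging docs; their exact names at the time of this commit are an assumption:

```python
import litellm
from litellm import completion

# Assumption: litellm exposes a `json_logs` toggle alongside verbose logging;
# the attribute names here follow litellm's debugging documentation and may
# differ slightly from what this exact commit introduced.
litellm.json_logs = True     # emit structured JSON log lines (easier to parse in Datadog)
litellm.set_verbose = True   # include the raw request payload in the logs

response = completion(
    model="gpt-3.5-turbo",   # placeholder model
    messages=[{"role": "user", "content": "ping"}],
)
```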
Krish Dholakia | ec2510029a | Merge pull request #3354 from BerriAI/litellm_replicate_cost_tracking | 2024-04-29 09:13:41 -07:00
    fix(utils.py): replicate now also has token based pricing for some models
Krrish Dholakia | 3725732c4d | fix(utils.py): default to time-based tracking for unmapped replicate models. fix time-based cost calc for replicate | 2024-04-29 08:36:01 -07:00
Krrish Dholakia | a18844b230 | fix(utils.py): use llama tokenizer for replicate models | 2024-04-29 08:28:31 -07:00
Krrish Dholakia | ab954243e8 | fix(utils.py): fix watson streaming | 2024-04-29 08:09:59 -07:00
Krrish Dholakia | 2cfb97141d | fix(utils.py): replicate now also has token based pricing for some models | 2024-04-29 08:06:15 -07:00
Krish Dholakia | 1841b74f49 | Merge branch 'main' into litellm_common_auth_params | 2024-04-28 08:38:06 -07:00
Krrish Dholakia | 5f0f3f9fe3 | fix(utils.py): don't return usage for streaming - openai spec | 2024-04-27 14:13:34 -07:00
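Per the OpenAI streaming spec referenced above, stream chunks carry content deltas rather than a usage block. A minimal consumer sketch (model name is a placeholder):

```python
from litellm import completion

# Streaming chunks follow the OpenAI chunk format: text arrives in
# `choices[0].delta`, and (per this fix) no `usage` dict is attached to chunks.
stream = completion(
    model="gpt-3.5-turbo",   # placeholder model
    messages=[{"role": "user", "content": "write a haiku"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="", flush=True)
```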
Ishaan Jaff | 6762d07c7f | Merge pull request #3330 from BerriAI/litellm_rdct_msgs | 2024-04-27 11:25:09 -07:00
    [Feat] Redact Logging Messages/Response content on Logging Providers with `litellm.turn_off_message_logging=True`
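A minimal sketch of the flag named in the PR title above; the "langfuse" callback and the model are illustrative placeholders, not part of this change:

```python
import litellm
from litellm import completion

# Redact message and response content from configured logging providers.
# `turn_off_message_logging` is the flag from the PR title; the callback
# choice ("langfuse") and the model are placeholders for illustration.
litellm.turn_off_message_logging = True
litellm.success_callback = ["langfuse"]

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "this text should not reach the logger"}],
)
```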
Ishaan Jaff | 4ce27e1219 | fix - sentry data redaction | 2024-04-27 11:23:08 -07:00
Krrish Dholakia | 48f19cf839 | feat(utils.py): unify common auth params across azure/vertex_ai/bedrock/watsonx | 2024-04-27 11:06:18 -07:00
Ishaan Jaff | b2111a97e2 | fix use redact_message_input_output_from_logging | 2024-04-27 10:51:17 -07:00
Ishaan Jaff | 10da35675f | feat - turn off message logging | 2024-04-27 10:03:07 -07:00
Tejas Ravishankar | 8ff9555bcf | fix: duplicate mention of VERTEXAI_PROJECT environment variable causing confusion | 2024-04-27 17:47:28 +04:00
Krish Dholakia | 2d976cfabc | Merge pull request #3270 from simonsanvil/feature/watsonx-integration | 2024-04-27 05:48:34 -07:00
    (feat) add IBM watsonx.ai as an llm provider
Krrish Dholakia | 9eb75cc159 | test(test_streaming.py): fix test | 2024-04-25 20:22:18 -07:00
Krrish Dholakia | 486c8ccc30 | fix(utils.py): handle pydantic v1 | 2024-04-25 20:01:36 -07:00
Krrish Dholakia | 850b056df5 | fix(utils.py): add more logging to identify ci/cd issue | 2024-04-25 19:57:24 -07:00
Krish Dholakia | 40b6b4794b | Merge pull request #3310 from BerriAI/litellm_langfuse_error_logging_2 | 2024-04-25 19:49:59 -07:00
    fix(proxy/utils.py): log rejected proxy requests to langfuse
Krrish Dholakia | 885de2e3c6 | fix(proxy/utils.py): log rejected proxy requests to langfuse | 2024-04-25 19:26:27 -07:00
Krish Dholakia | 69280177a3 | Merge pull request #3308 from BerriAI/litellm_fix_streaming_n | 2024-04-25 18:36:54 -07:00
    fix(utils.py): fix the response object returned when n>1 for stream=true
Krrish Dholakia | 86a3d24d75 | fix(utils.py): pass through 'response_format' for mistral | 2024-04-25 18:27:41 -07:00
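A sketch of the `response_format` passthrough for Mistral fixed above, assuming the standard OpenAI-style parameter shape; the model id is a placeholder:

```python
from litellm import completion

# `response_format` is forwarded to the provider unchanged; with Mistral's API
# this requests JSON-mode output. The model id is a placeholder.
response = completion(
    model="mistral/mistral-large-latest",
    messages=[{"role": "user", "content": "Return a JSON object with a 'city' key."}],
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)
```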
Krrish Dholakia | 1ebc7bb3b7 | fix(utils.py): handle finish reason logic | 2024-04-25 18:18:00 -07:00
Krrish Dholakia | 9f5ba67f5d | fix(utils.py): return logprobs as an object not dict | 2024-04-25 17:55:18 -07:00
Krrish Dholakia | 6c5c7cca3d | fix(utils.py): fix the response object returned when n>1 for stream=true | 2024-04-25 13:27:29 -07:00
    Fixes https://github.com/BerriAI/litellm/issues/3276
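For the n>1 streaming fix above: each chunk's choice carries an `index`, so callers can reassemble the parallel completions. A sketch under that assumption (model is a placeholder):

```python
from collections import defaultdict
from litellm import completion

# With n>1 and stream=True, every streamed choice includes an `index`,
# which lets the caller rebuild each of the n completions separately.
buffers = defaultdict(str)
stream = completion(
    model="gpt-3.5-turbo",   # placeholder model
    messages=[{"role": "user", "content": "name a color"}],
    n=2,
    stream=True,
)
for chunk in stream:
    for choice in chunk.choices:
        if choice.delta.content:
            buffers[choice.index] += choice.delta.content

for i, text in sorted(buffers.items()):
    print(f"choice {i}: {text}")
```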
Krish Dholakia | 435a4b5ed4 | Merge pull request #3267 from BerriAI/litellm_openai_streaming_fix | 2024-04-24 21:08:33 -07:00
    fix(utils.py): fix streaming to not return usage dict
Krrish Dholakia | dacadbf624 | fix(utils.py): fix anthropic streaming return usage tokens | 2024-04-24 20:56:10 -07:00
Krrish Dholakia | 495aebb582 | fix(utils.py): fix setattr error | 2024-04-24 20:19:27 -07:00
Ishaan Jaff | ca4fd85296 | fix show api_base, model in timeout errors | 2024-04-24 14:01:32 -07:00
Krish Dholakia | 263439ee4a | Merge pull request #3098 from greenscale-ai/main | 2024-04-24 13:09:03 -07:00
    Support for Greenscale AI logging
Krrish Dholakia | b918f58262 | fix(vertex_ai.py): raise explicit error when image url fails to download - prevents silent failure | 2024-04-24 09:23:15 -07:00
Krrish Dholakia | 48c2c3d78a | fix(utils.py): fix streaming to not return usage dict | 2024-04-24 08:06:07 -07:00
    Fixes https://github.com/BerriAI/litellm/issues/3237
Krrish Dholakia | ab24f61099 | fix(utils.py): fix mistral api tool calling response | 2024-04-23 19:59:11 -07:00
Krish Dholakia | 4acdde988f | Merge pull request #3250 from BerriAI/litellm_caching_no_cache_fix | 2024-04-23 19:57:07 -07:00
    fix(utils.py): fix 'no-cache': true when caching is turned on
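A sketch of the 'no-cache' control fixed in the PR above, assuming litellm's caching setup with an in-memory cache; the model and prompt are placeholders:

```python
import litellm
from litellm import completion
from litellm.caching import Cache

# Enable caching globally (in-memory here; other backends such as Redis exist).
litellm.cache = Cache()

messages = [{"role": "user", "content": "what's 2 + 2?"}]

# Normal call: the response may be served from, or written to, the cache.
cached = completion(model="gpt-3.5-turbo", messages=messages)

# Per-request cache control from the PR title: with caching turned on,
# {"no-cache": True} forces a fresh provider call instead of a cache hit.
fresh = completion(model="gpt-3.5-turbo", messages=messages, cache={"no-cache": True})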
Krrish Dholakia | d67e47d7fd | fix(test_caching.py): add longer delay for async test | 2024-04-23 16:13:03 -07:00
David Manouchehri | 69ddd7c68f | (utils.py) - Add seed for Groq | 2024-04-23 20:32:21 +00:00
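A sketch of passing `seed` through to a Groq model, per the commit above; the model id is a placeholder and reproducibility is best-effort on the provider side:

```python
from litellm import completion

# `seed` is forwarded to Groq for (best-effort) reproducible sampling.
# The model id below is a placeholder.
response = completion(
    model="groq/llama3-8b-8192",
    messages=[{"role": "user", "content": "pick a random fruit"}],
    seed=42,
)
print(response.choices[0].message.content)
```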
Krrish Dholakia | 161e836427 | fix(utils.py): fix 'no-cache': true when caching is turned on | 2024-04-23 12:58:30 -07:00
Simon S. Viloria | 2ef4fb2efa | Merge branch 'BerriAI:main' into feature/watsonx-integration | 2024-04-23 12:18:34 +02:00
Simon Sanchez Viloria | 74d2ba0a23 | feat - watsonx refactoring, removed dependency, and added support for embedding calls | 2024-04-23 12:01:13 +02:00
David Manouchehri | 6d61607ee3 | (utils.py) - Fix response_format typo for Groq | 2024-04-23 04:26:26 +00:00
Krrish Dholakia | be4a3de27c | fix(utils.py): support deepinfra response object | 2024-04-22 10:51:11 -07:00