ac5c6c8751  Krrish Dholakia  2024-08-21 13:37:21 -07:00
    fix(litellm_pre_call_utils.py): handle dynamic keys via api correctly

77a6f597e0  Krrish Dholakia  2024-08-20 21:35:02 -07:00
    fix(litellm_logging.py): add stricter check for special param being non none
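The "stricter check for special param being non none" fix above reflects a common Python pitfall: filtering a parameter by truthiness instead of an explicit `is not None` comparison silently drops legitimate falsy values such as `0` or `""`. A minimal sketch of the distinction (the `pick_params_*` helpers and parameter names are hypothetical, not litellm code):

```python
def pick_params_truthy(params: dict) -> dict:
    # Loose check: 0, "", and {} are all treated as "not set".
    return {k: v for k, v in params.items() if v}

def pick_params_strict(params: dict) -> dict:
    # Strict check: only None means "not set".
    return {k: v for k, v in params.items() if v is not None}

params = {"temperature": 0, "stop": None, "user": ""}
print(pick_params_truthy(params))  # {}
print(pick_params_strict(params))  # {'temperature': 0, 'user': ''}
```

With the loose check, an explicitly requested `temperature=0` would vanish from the forwarded parameters; the strict check preserves it.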
0091f64ff1  Krrish Dholakia  2024-08-20 19:01:20 -07:00
    fix(utils.py): ensure consistent cost calc b/w returned header and logged object

e49e454929  Krish Dholakia  2024-08-20 11:42:48 -07:00
    Merge pull request #5287 from BerriAI/litellm_fix_response_cost_cal
    fix(cost_calculator.py): only override base model if custom pricing is set

c82714757a  Ishaan Jaff  2024-08-19 20:41:45 -07:00
    Merge pull request #5288 from BerriAI/litellm_aporia_refactor
    [Feat] V2 aporia guardrails litellm

cf1a1605a6  Krrish Dholakia  2024-08-19 16:05:49 -07:00
    feat(cost_calculator.py): only override base model if custom pricing is set

6af497e383  Ishaan Jaff  2024-08-19 11:25:31 -07:00
    feat run aporia as post call success hook

0d82089136  Krrish Dholakia  2024-08-19 10:56:48 -07:00
    test(test_caching.py): re-introduce testing for s3 cache w/ streaming
    Closes https://github.com/BerriAI/litellm/issues/3268

1856ac585d  Krrish Dholakia  2024-08-17 16:57:55 -07:00
    feat(pass_through_endpoints.py): add pass-through support for all cohere endpoints

29bedae79f  Krrish Dholakia  2024-08-17 10:46:59 -07:00
    feat(google_ai_studio_endpoints.py): support pass-through endpoint for all google ai studio requests

88fccb2427  Krish Dholakia  2024-08-16 19:34:16 -07:00
    Merge branch 'main' into litellm_log_model_price_information

0916197c9d  Krish Dholakia  2024-08-16 19:16:20 -07:00
    Merge pull request #5244 from BerriAI/litellm_better_error_logging_sentry
    refactor: replace .error() with .exception() logging for better debugging on sentry

937471223a  Ishaan Jaff  2024-08-16 16:56:08 -07:00
    fix databricks streaming test

9609505d0c  Krrish Dholakia  2024-08-16 16:42:38 -07:00
    fix(litellm_logging.py): fix price information logging to s3

ef51f8600d  Krrish Dholakia  2024-08-16 16:21:34 -07:00
    feat(litellm_logging.py): support logging model price information to s3 logs

2874b94fb1  Krrish Dholakia  2024-08-16 09:22:47 -07:00
    refactor: replace .error() with .exception() logging for better debugging on sentry
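The `.error()` to `.exception()` refactor above matters because `logging.Logger.exception()` attaches the active traceback to the log record, which error trackers such as Sentry use to group and debug failures, while a plain `.error()` call (without `exc_info=True`) records only the message. A minimal stdlib demonstration, capturing output in memory so the difference is visible (logger name and messages are illustrative):

```python
import io
import logging

# Route log records to an in-memory stream so we can inspect them.
stream = io.StringIO()
logger = logging.getLogger("exception_demo")
logger.addHandler(logging.StreamHandler(stream))

try:
    1 / 0
except ZeroDivisionError:
    logger.error("division failed (no traceback)")
    logger.exception("division failed (with traceback)")

output = stream.getvalue()
# Only the .exception() call includes the stack trace.
print("Traceback" in output)  # True
```

`logger.exception(...)` must be called from inside an `except` block; it is equivalent to `logger.error(..., exc_info=True)`.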
b381dec0a8  Krrish Dholakia  2024-08-15 18:05:06 -07:00
    fix(litellm_logging.py): wrap function to safely fail

c0448b9641  Krrish Dholakia  2024-08-15 17:53:25 -07:00
    feat(litellm_logging.py): cleanup payload + add response cost to logged payload

cf87c64348  Krrish Dholakia  2024-08-15 17:33:40 -07:00
    fix(litellm_logging.py): fix standard payload

b08492bc29  Krrish Dholakia  2024-08-15 17:09:02 -07:00
    fix(s3.py): fix s3 logging payload to have valid json values
    Previously pydantic objects were being stringified, making them unparsable
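The s3 payload fix above is a serialization pitfall: calling `str()` on a model object buries its fields inside a repr string, so consumers of the logged JSON cannot query them, whereas converting the object to a plain dict first keeps every field machine-readable. A stdlib sketch of the same pitfall, using a dataclass in place of a pydantic model (the `Usage` type is invented for illustration):

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class Usage:
    prompt_tokens: int
    completion_tokens: int

usage = Usage(prompt_tokens=10, completion_tokens=5)

# Stringifying hides the fields inside an unparsable repr string.
bad = json.dumps({"usage": str(usage)})

# Converting to a plain dict first keeps every field queryable.
good = json.dumps({"usage": asdict(usage)})

print(bad)   # {"usage": "Usage(prompt_tokens=10, completion_tokens=5)"}
print(good)  # {"usage": {"prompt_tokens": 10, "completion_tokens": 5}}
```

For actual pydantic models the equivalent conversion is `model.model_dump()` (pydantic v2) or `model.dict()` (v1) before the payload is JSON-encoded.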
8e90139377  Ishaan Jaff  2024-08-10 09:28:46 -07:00
    refactor prometheus to be a customLogger class

bb6edc75a9  Ishaan Jaff  2024-08-10 09:15:23 -07:00
    use customLogger for prometheus logger

9d2410abb1  Krrish Dholakia  2024-08-09 18:20:42 -07:00
    fix(litellm_logging.py): fix calling success callback w/ stream_options true
    Fixes https://github.com/BerriAI/litellm/issues/5118

c6799e8aad  Ishaan Jaff  2024-08-08 08:03:08 -07:00
    fix use get_file_check_sum

c9856d91c7  Ishaan Jaff  2024-08-05 08:54:04 -07:00
    fix linting errors

e978e37487  Ishaan Jaff  2024-08-05 08:40:19 -07:00
    use util convert_litellm_response_object_to_dict

8204037975  Krrish Dholakia  2024-08-02 07:38:06 -07:00
    fix(utils.py): fix codestral streaming

857ec4af18  Krrish Dholakia  2024-08-01 18:07:47 -07:00
    feat(litellm_logging.py): log exception response headers to langfuse

1a8d27299b  Ishaan Jaff  2024-08-01 18:07:38 -07:00
    init gcs using gcs_bucket

0f625f5d8c  Krrish Dholakia  2024-08-01 18:07:38 -07:00
    fix(google.py): fix cost tracking for vertex ai mistral models

3378273201  Krrish Dholakia  2024-08-01 17:32:22 -07:00
    fix(litellm_logging.py): fix linting erros

08541d056c  Krrish Dholakia  2024-08-01 10:26:59 -07:00
    fix(litellm_logging.py): use 1 cost calc function across response headers + logging integrations
    Ensures consistent cost calculation when azure base models are used
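The "use 1 cost calc function" commit above describes a deduplication pattern: both the response header and the logged payload derive their cost from one shared function, so the two values can never drift apart. A hypothetical sketch of that pattern (the price table, header name, and all function names here are invented for illustration, not litellm's actual implementation):

```python
# Invented prices: (input, output) USD per 1k tokens.
PRICES_PER_1K = {"gpt-4o": (0.005, 0.015)}

def calculate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    # Single source of truth for cost computation.
    input_price, output_price = PRICES_PER_1K[model]
    return (prompt_tokens * input_price + completion_tokens * output_price) / 1000

def build_response_headers(model: str, prompt_tokens: int, completion_tokens: int) -> dict:
    # Header value comes from the shared function...
    cost = calculate_cost(model, prompt_tokens, completion_tokens)
    return {"x-response-cost": str(cost)}

def build_logged_payload(model: str, prompt_tokens: int, completion_tokens: int) -> dict:
    # ...and so does the logged payload, so they always agree.
    cost = calculate_cost(model, prompt_tokens, completion_tokens)
    return {"model": model, "response_cost": cost}

headers = build_response_headers("gpt-4o", 1000, 500)
payload = build_logged_payload("gpt-4o", 1000, 500)
print(headers["x-response-cost"], payload["response_cost"])
```

The alternative, two independently maintained cost computations, is exactly the inconsistency the earlier "ensure consistent cost calc b/w returned header and logged object" fix addresses.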
802e39b606  Krrish Dholakia  2024-07-30 14:20:52 -07:00
    fix(utils.py): fix cost tracking for vertex ai partner models

7cfcc2aac1  Ishaan Jaff  2024-07-27 11:39:03 -07:00
    refactor use common helper

02b886d741  Krrish Dholakia  2024-07-26 20:50:43 -07:00
    feat(vertex_httpx.py): support logging vertex ai safety results to langfuse
    Closes https://github.com/BerriAI/litellm/issues/3230

1562cba823  Krrish Dholakia  2024-07-26 19:04:08 -07:00
    fix(utils.py): fix cache hits for streaming
    Fixes https://github.com/BerriAI/litellm/issues/4109

d3ff21181c  Krrish Dholakia  2024-07-25 22:12:07 -07:00
    fix(litellm_cost_calc/google.py): support meta llama vertex ai cost tracking

7cf9620b12  Krish Dholakia  2024-07-22 22:40:39 -07:00
    Merge branch 'main' into litellm_braintrust_integration

8c005d8134  Krrish Dholakia  2024-07-22 20:58:02 -07:00
    feat(redact_messages.py): allow remove sensitive key information before passing to logging integration

d4c72f913c  Krrish Dholakia  2024-07-22 17:04:55 -07:00
    feat(braintrust_logging.py): working braintrust logging for successful calls

9560a459c2  Ishaan Jaff  2024-07-22 10:58:20 -07:00
    feat - add support to init arize ai

2c6482b6e1  Krish Dholakia  2024-07-18 20:40:16 -07:00
    Merge branch 'main' into litellm_anthropic_response_schema_support

aac912d3f8  Krrish Dholakia  2024-07-18 16:57:38 -07:00
    feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls
    allows passing response_schema for anthropic calls. supports schema validation.

4d9f63632d  Ishaan Jaff  2024-07-17 17:27:46 -07:00
    fix custom Logger is None

b473e8da83  Ishaan Jaff  2024-07-17 16:54:40 -07:00
    Merge pull request #4758 from BerriAI/litellm_langsmith_async_support
    [Feat] Use Async Httpx client for langsmith logging

d3ee7a947c  Ishaan Jaff  2024-07-17 15:35:13 -07:00
    use langsmith as a custom callback class

e91f6153c8  Krrish Dholakia  2024-07-17 11:15:30 -07:00
    fix(litellm_logging.py): fix async caching for sync streaming calls (don't do it)
    Checks if call is async before running async caching for streaming call
    Fixes https://github.com/BerriAI/litellm/issues/4511#issuecomment-2233211808

6d23b78a92  Ishaan Jaff  2024-07-16 21:49:45 -07:00
    fix remove index from tool calls cohere error

5ba56191c4  Krrish Dholakia  2024-07-15 21:28:33 -07:00
    fix(litellm_logging.py): fix circular reference

4687b12732  Krrish Dholakia  2024-07-15 19:25:56 -07:00
    fix(litellm_logging.py): log response_cost=0 for failed calls
    Fixes https://github.com/BerriAI/litellm/issues/4604