litellm-mirror/litellm/litellm_core_utils

Latest commit: 670ecda4e2 by Ishaan Jaff, 2024-10-04 11:56:10 +05:30
(fixes) gcs bucket key based logging (#6044)

* fixes for gcs bucket logging
* fix StandardCallbackDynamicParams
* fix - gcs logging when payload is not serializable
* add test_add_callback_via_key_litellm_pre_call_utils_gcs_bucket
* working success callbacks
* linting fixes
* fix linting error
* add type hints to functions
* fixes for dynamic success and failure logging
* fix for test_async_chat_openai_stream
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| audio_utils/ | fix import error | 2024-09-05 10:09:44 -07:00 |
| llm_cost_calc/ | use cost per token for jamba | 2024-08-27 14:18:04 -07:00 |
| asyncify.py | build(config.yml): bump anyio version | 2024-08-27 07:37:06 -07:00 |
| core_helpers.py | [Feat] Improve OTEL Tracking - Require all Redis Cache reads to be logged on OTEL (#5881) | 2024-09-25 10:57:08 -07:00 |
| exception_mapping_utils.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| get_llm_provider_logic.py | OpenAI /v1/realtime api support (#6047) | 2024-10-03 17:11:22 -04:00 |
| json_validation_rule.py | feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls | 2024-07-18 16:57:38 -07:00 |
| litellm_logging.py | (fixes) gcs bucket key based logging (#6044) | 2024-10-04 11:56:10 +05:30 |
| llm_request_utils.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| logging_utils.py | feat run aporia as post call success hook | 2024-08-19 11:25:31 -07:00 |
| redact_messages.py | refactor redact_message_input_output_from_custom_logger | 2024-09-09 16:00:47 -07:00 |
| response_header_helpers.py | fix(utils.py): guarantee openai-compatible headers always exist in response | 2024-09-28 21:08:15 -07:00 |
| streaming_utils.py | fix(streaming_utils.py): fix generic_chunk_has_all_required_fields | 2024-08-26 21:13:02 -07:00 |
| token_counter.py | fix(token_counter.py): New `get_modified_max_tokens` helper func | 2024-06-27 15:38:09 -07:00 |