litellm-mirror/litellm/litellm_core_utils
Last updated: 2024-09-28 21:08:15 -07:00

File                          Last commit message                                                                         Date
audio_utils                   fix import error                                                                            2024-09-05 10:09:44 -07:00
llm_cost_calc                 use cost per token for jamba                                                                2024-08-27 14:18:04 -07:00
asyncify.py                   build(config.yml): bump anyio version                                                       2024-08-27 07:37:06 -07:00
core_helpers.py               [Feat] Improve OTEL Tracking - Require all Redis Cache reads to be logged on OTEL (#5881)   2024-09-25 10:57:08 -07:00
exception_mapping_utils.py    LiteLLM Minor Fixes & Improvements (09/27/2024) (#5938)                                     2024-09-27 22:52:57 -07:00
get_llm_provider_logic.py     [Feat] Add fireworks AI embedding (#5812)                                                   2024-09-20 22:23:28 -07:00
json_validation_rule.py       feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls         2024-07-18 16:57:38 -07:00
litellm_logging.py            fix(litellm_logging.py): fix linting error                                                  2024-09-28 21:08:14 -07:00
llm_request_utils.py          [Feat] Add fireworks AI embedding (#5812)                                                   2024-09-20 22:23:28 -07:00
logging_utils.py              feat run aporia as post call success hook                                                   2024-08-19 11:25:31 -07:00
redact_messages.py            refactor redact_message_input_output_from_custom_logger                                     2024-09-09 16:00:47 -07:00
response_header_helpers.py    fix(utils.py): guarantee openai-compatible headers always exist in response                 2024-09-28 21:08:15 -07:00
streaming_utils.py            fix(streaming_utils.py): fix generic_chunk_has_all_required_fields                          2024-08-26 21:13:02 -07:00
token_counter.py              fix(token_counter.py): New `get_modified_max_tokens` helper func                            2024-06-27 15:38:09 -07:00