litellm-mirror/litellm/litellm_core_utils
| Name | Last commit message | Date |
|---|---|---|
| llm_cost_calc/ | use cost per token for jamba | 2024-08-27 14:18:04 -07:00 |
| asyncify.py | build(config.yml): bump anyio version | 2024-08-27 07:37:06 -07:00 |
| core_helpers.py | fix use get_file_check_sum | 2024-08-08 08:03:08 -07:00 |
| exception_mapping_utils.py | fix - error str in OpenAI, Azure exception | 2024-06-29 13:11:55 -07:00 |
| json_validation_rule.py | feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls | 2024-07-18 16:57:38 -07:00 |
| litellm_logging.py | fix(utils.py): correctly log streaming cache hits (#5417) (#5426) | 2024-08-28 22:50:33 -07:00 |
| llm_request_utils.py | fix llm_request_utils | 2024-06-18 20:06:58 -07:00 |
| logging_utils.py | feat run aporia as post call success hook | 2024-08-19 11:25:31 -07:00 |
| redact_messages.py | feat(redact_messages.py): allow remove sensitive key information before passing to logging integration | 2024-07-22 20:58:02 -07:00 |
| streaming_utils.py | fix(streaming_utils.py): fix generic_chunk_has_all_required_fields | 2024-08-26 21:13:02 -07:00 |
| token_counter.py | fix(token_counter.py): New `get_modified_max_tokens` helper func | 2024-06-27 15:38:09 -07:00 |
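
The helpers in this directory are internal, but two of them surface through litellm's documented public API: `token_counter.py` backs `litellm.token_counter`, and `llm_cost_calc/` feeds `litellm.cost_per_token`. The snippet below is a minimal sketch of that public surface, assuming the documented signatures; the model name, message content, and completion token count are placeholders.

```python
# Minimal sketch of the public wrappers backed by this directory
# (assumes litellm's documented token_counter / cost_per_token signatures;
# the model name and messages are placeholders).
import litellm

messages = [{"role": "user", "content": "Hello, how are you?"}]

# token_counter.py: estimate how many tokens a message list consumes.
prompt_tokens = litellm.token_counter(model="gpt-3.5-turbo", messages=messages)

# llm_cost_calc/: convert token counts into USD using the model's pricing.
prompt_cost, completion_cost = litellm.cost_per_token(
    model="gpt-3.5-turbo",
    prompt_tokens=prompt_tokens,
    completion_tokens=50,
)

print(f"prompt tokens: {prompt_tokens}")
print(f"estimated cost: ${prompt_cost + completion_cost:.6f}")
```

As its commit message notes, `redact_messages.py` strips sensitive information before it reaches logging integrations; setting `litellm.turn_off_message_logging = True` is the documented flag that enables this redaction.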