Directory listing:

- audio_utils/
- llm_cost_calc/
- llm_response_utils/
- prompt_templates/
- tokenizers/
- asyncify.py
- core_helpers.py
- default_encoding.py
- duration_parser.py
- exception_mapping_utils.py
- get_llm_provider_logic.py
- get_supported_openai_params.py
- json_validation_rule.py
- litellm_logging.py
- llm_request_utils.py
- logging_utils.py
- mock_functions.py
- README.md
- realtime_streaming.py
- redact_messages.py
- response_header_helpers.py
- rules.py
- streaming_chunk_builder_utils.py
- streaming_handler.py
- token_counter.py
Folder Contents
This folder contains general-purpose utilities that are used in multiple places in the codebase.
Core files:

- streaming_handler.py: the core streaming logic, plus streaming-related helper utils
- core_helpers.py: code used in types/ - e.g. map_finish_reason.
- exception_mapping_utils.py: utils for mapping exceptions to openai-compatible error types.
- default_encoding.py: code for loading the default encoding (tiktoken)
- get_llm_provider_logic.py: code for inferring the LLM provider from a given model name (a minimal sketch appears after this list).
- duration_parser.py: code for parsing durations - e.g. "1d", "1mo", "10s" (see the second sketch below).
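As a rough illustration of the kind of model-name-to-provider mapping that get_llm_provider_logic.py performs, here is a minimal sketch. The prefix table, the `infer_provider` name, and the return shape are assumptions made for illustration only, not litellm's actual API.

```python
from typing import Optional, Tuple

# Hypothetical sketch only - not litellm's actual implementation.
# The prefix table below is an illustrative assumption.
_PREFIX_TO_PROVIDER = {
    "gpt-": "openai",
    "o1": "openai",
    "claude-": "anthropic",
    "command-": "cohere",
}


def infer_provider(model: str, custom_llm_provider: Optional[str] = None) -> Tuple[str, str]:
    """Return (model_name, provider), honouring an explicit provider override."""
    if custom_llm_provider:
        return model, custom_llm_provider
    # An explicit "provider/model" route takes precedence over prefix matching.
    if "/" in model:
        provider, _, model_name = model.partition("/")
        return model_name, provider
    for prefix, provider in _PREFIX_TO_PROVIDER.items():
        if model.startswith(prefix):
            return model, provider
    raise ValueError(f"Could not infer a provider for model: {model!r}")


if __name__ == "__main__":
    print(infer_provider("gpt-4o"))              # ('gpt-4o', 'openai')
    print(infer_provider("anthropic/claude-3"))  # ('claude-3', 'anthropic')
```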
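Similarly, here is a minimal sketch of duration parsing in the spirit of duration_parser.py, assuming a simple `<number><unit>` grammar. The regex, the supported units, and the `parse_duration` name are illustrative assumptions rather than the library's real interface.

```python
import re
from datetime import timedelta

# Hypothetical sketch only - not litellm's actual implementation.
# Supported units ("s", "m", "h", "d", "mo") are an assumption for illustration.
_DURATION_PATTERN = re.compile(r"^(\d+)(mo|[smhd])$")

_UNIT_TO_SECONDS = {
    "s": 1,
    "m": 60,
    "h": 60 * 60,
    "d": 24 * 60 * 60,
    "mo": 30 * 24 * 60 * 60,  # approximate a month as 30 days
}


def parse_duration(duration: str) -> timedelta:
    """Convert a string like "10s", "1d", or "1mo" into a timedelta."""
    match = _DURATION_PATTERN.match(duration.strip())
    if match is None:
        raise ValueError(f"Unsupported duration format: {duration!r}")
    value, unit = int(match.group(1)), match.group(2)
    return timedelta(seconds=value * _UNIT_TO_SECONDS[unit])


if __name__ == "__main__":
    print(parse_duration("10s"))  # 0:00:10
    print(parse_duration("1d"))   # 1 day, 0:00:00
    print(parse_duration("1mo"))  # 30 days, 0:00:00
```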