Mirror of https://github.com/BerriAI/litellm.git, synced 2025-04-26 11:14:04 +00:00
* refactor(main.py): streaming_chunk_builder: use <100 lines of code; refactor each component into a separate function, making it easier to maintain and test
* fix(utils.py): handle choices being None (OpenAI pydantic schema updated)
* fix(main.py): fix linting error
* feat(streaming_chunk_builder_utils.py): update stream chunk builder to support rebuilding audio chunks from OpenAI
* test(test_custom_callback_input.py): test that message redaction works for audio output
* fix(streaming_chunk_builder_utils.py): return Anthropic token usage info directly
* fix(stream_chunk_builder_utils.py): run validation check before entering chunk processor
* fix(main.py): fix import
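The refactor described above splits the stream chunk builder into small single-purpose functions (validation up front, then per-field assembly). Below is a minimal, hypothetical sketch of that structure, not litellm's actual implementation: the chunk dicts, function names, and usage fields are assumptions modeled on OpenAI-style streaming chunks.

```python
# Hypothetical sketch of a stream chunk builder split into small
# functions, as the refactor describes. Not litellm's real code.

def validate_chunks(chunks):
    """Run a validation check before entering the chunk processor."""
    if not chunks:
        raise ValueError("no chunks to combine")
    return chunks


def combine_content(chunks):
    """Concatenate text deltas, tolerating chunks whose choices are None."""
    parts = []
    for chunk in chunks:
        choices = chunk.get("choices")
        if not choices:  # choices may be None or empty on some chunks
            continue
        delta = choices[0].get("delta") or {}
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)


def combine_usage(chunks):
    """Sum token usage reported on any chunk (often only the final one)."""
    totals = {"prompt_tokens": 0, "completion_tokens": 0}
    for chunk in chunks:
        usage = chunk.get("usage") or {}
        for key in totals:
            totals[key] += usage.get(key, 0)
    totals["total_tokens"] = totals["prompt_tokens"] + totals["completion_tokens"]
    return totals


def stream_chunk_builder(chunks):
    """Rebuild one response dict from a list of streamed chunks."""
    chunks = validate_chunks(chunks)
    return {
        "content": combine_content(chunks),
        "usage": combine_usage(chunks),
    }
```

Keeping each concern in its own function lets the validation, content assembly, and usage totals be unit-tested independently, which is the maintainability point the commit message makes.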
audio_utils/
llm_cost_calc/
asyncify.py
core_helpers.py
exception_mapping_utils.py
get_llm_provider_logic.py
json_validation_rule.py
litellm_logging.py
llm_request_utils.py
logging_utils.py
mock_functions.py
realtime_streaming.py
redact_messages.py
response_header_helpers.py
streaming_chunk_builder_utils.py
streaming_utils.py
token_counter.py