Mirror of https://github.com/BerriAI/litellm.git (synced 2025-04-27 03:34:10 +00:00)
* (refactor) use `_assemble_complete_response_from_streaming_chunks`
* add unit test for `test_assemble_complete_response_from_streaming_chunks_1`
* fix assembling `complete_streaming_response`
* config: add `logging_testing`
* add `logging_coverage` in codecov
* add test `test_assemble_complete_response_from_streaming_chunks_3`
* add unit tests for `_assemble_complete_response_from_streaming_chunks`
* remove unused / junk function
* add test for streaming chunks when assembly errors
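The commits above center on assembling a single complete response out of a sequence of streamed chunks. The sketch below illustrates the general idea only; the `StreamChunk` dataclass and `assemble_complete_response` function are hypothetical names for illustration and are not litellm's actual API.

```python
# Hypothetical sketch of chunk assembly; not litellm's real implementation.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class StreamChunk:
    content: Optional[str]        # incremental text delta; None for control chunks
    finish_reason: Optional[str]  # set only on the final chunk (e.g. "stop")


def assemble_complete_response(chunks: List[StreamChunk]) -> dict:
    """Concatenate streamed deltas into one complete response dict."""
    if not chunks:
        # Guard against assembling from an empty stream (an error case the
        # commit list above also adds a test for).
        raise ValueError("cannot assemble a response from zero chunks")
    text = "".join(c.content for c in chunks if c.content is not None)
    finish = next(
        (c.finish_reason for c in reversed(chunks) if c.finish_reason), None
    )
    return {"content": text, "finish_reason": finish}
```

A caller would feed every chunk received from the stream into the list and assemble once the stream closes.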
Directory contents:

* audio_utils/
* llm_cost_calc/
* asyncify.py
* core_helpers.py
* exception_mapping_utils.py
* get_llm_provider_logic.py
* json_validation_rule.py
* litellm_logging.py
* llm_request_utils.py
* logging_utils.py
* mock_functions.py
* realtime_streaming.py
* redact_messages.py
* response_header_helpers.py
* streaming_utils.py
* token_counter.py