Mirror of https://github.com/BerriAI/litellm.git (synced 2025-04-25 18:54:30 +00:00)
* fix(spend_tracking_utils.py): prevent logging entire mp4 files to the db. Fixes https://github.com/BerriAI/litellm/issues/9732
* fix(anthropic/chat/transformation.py): fix double counting of cache creation input tokens. Fixes https://github.com/BerriAI/litellm/issues/9812
* refactor(anthropic/chat/transformation.py): refactor streaming to use the same usage calculation block as non-streaming, to reduce errors
* fix(bedrock/chat/converse_transformation.py): don't increment prompt tokens with cache_creation_input_tokens
* build: remove redisvl from requirements.txt (temporary)
* fix(spend_tracking_utils.py): handle circular references
* test: update code coverage test
* test: update test
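The two spend_tracking_utils.py fixes above amount to sanitizing a payload before it is persisted: very large inline media (such as base64-encoded mp4 data) should be truncated, and objects that reference themselves must not be fed straight into a JSON serializer. The sketch below only illustrates that idea; the function names and size threshold are hypothetical and are not LiteLLM's actual implementation.

```python
import json

MAX_STRING_FIELD_LEN = 1_000  # hypothetical cutoff for inline media / base64 blobs


def _sanitize(value, stack=None):
    """Recursively truncate oversized strings and break circular references."""
    if stack is None:
        stack = set()
    if isinstance(value, str):
        if len(value) > MAX_STRING_FIELD_LEN:
            return value[:MAX_STRING_FIELD_LEN] + "...truncated"
        return value
    if isinstance(value, (dict, list, tuple)):
        if id(value) in stack:
            # This container is already on the current path: a true cycle.
            return "<circular reference>"
        stack.add(id(value))
        if isinstance(value, dict):
            out = {k: _sanitize(v, stack) for k, v in value.items()}
        else:
            out = [_sanitize(v, stack) for v in value]
        stack.discard(id(value))
        return out
    return value


def safe_payload_json(payload: dict) -> str:
    """Serialize a spend-tracking payload without huge blobs or a RecursionError."""
    return json.dumps(_sanitize(payload), default=str)


if __name__ == "__main__":
    payload = {"model": "gpt-4o", "video": "A" * 5_000_000}
    payload["self"] = payload  # deliberately circular
    print(len(safe_payload_json(payload)))  # stays small and serializable
```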
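The Anthropic and Bedrock bullets describe the same accounting pitfall: if the prompt-token figure reported upstream is treated as already covering cache creation, adding cache_creation_input_tokens on top of it counts those tokens twice. A minimal sketch of the intended arithmetic, assuming (as the commit message implies) that the reported prompt count should not be incremented again; the field names stand in for the real provider response shapes:

```python
from dataclasses import dataclass


@dataclass
class Usage:
    prompt_tokens: int
    completion_tokens: int
    cache_creation_input_tokens: int = 0

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens


def build_usage(provider_usage: dict) -> Usage:
    """Map a provider usage payload to a single Usage object.

    Assumption for this sketch: the provider's prompt-token figure should be
    reported as-is, with cache_creation_input_tokens surfaced separately
    rather than added to it a second time.
    """
    return Usage(
        prompt_tokens=provider_usage["input_tokens"],
        completion_tokens=provider_usage["output_tokens"],
        cache_creation_input_tokens=provider_usage.get(
            "cache_creation_input_tokens", 0
        ),
    )


if __name__ == "__main__":
    usage = build_usage(
        {"input_tokens": 1200, "output_tokens": 80, "cache_creation_input_tokens": 1000}
    )
    # Double-counting bug: 1200 + 1000 = 2200 prompt tokens.
    # Fixed accounting keeps prompt_tokens at 1200 and reports the cache figure separately.
    print(usage.prompt_tokens, usage.cache_creation_input_tokens, usage.total_tokens)
```

Routing both the streaming and non-streaming paths through one such helper is also what the refactor bullet is after: a single usage-calculation block is easier to keep consistent than two copies.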
azure_client_usage_test.py
ban_constant_numbers.py
bedrock_pricing.py
callback_manager_test.py
check_data_replace_usage.py
check_licenses.py
enforce_llms_folder_style.py
ensure_async_clients_test.py
liccheck.ini
license_cache.json
litellm_logging_code_coverage.py
log.txt
pass_through_code_coverage.py
prevent_key_leaks_in_exceptions.py
recursive_detector.py
router_code_coverage.py
router_enforce_line_length.py
test_aio_http_image_conversion.py
test_router_strategy_async.py
user_api_key_auth_code_coverage.py