mirror of https://github.com/BerriAI/litellm.git
* feat(router.py): add retry headers to the response. Makes it easy to add tests ensuring model-specific retries are respected (see the sketch below)
* fix(add_retry_headers.py): clarify attempted retries vs. max retries
* test(test_fallbacks.py): add a test checking that the max retries set for a model are respected
* test(test_fallbacks.py): assert the values for attempted retries and max retries are as expected
* fix(utils.py): return the timeout in litellm proxy response headers (see the usage sketch after the file list)
* test(test_fallbacks.py): add a test asserting the model-specific timeout is used on a timeout error
* test: add a bad model with a timeout to the proxy
* fix: fix a linting error
* fix(router.py): fix getting the model list from a model alias
* test: loosen a test restriction to account for other events on the proxy
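The retry-header change is the core of this commit. Below is a minimal sketch of what an `add_retry_headers.py`-style helper could look like. The function name, the header names (`x-litellm-attempted-retries`, `x-litellm-max-retries`), and the `_hidden_params` / `additional_headers` storage layout are assumptions based on the commit description, not a verbatim copy of the repository's implementation.

```python
from typing import Any, Optional


def add_retry_headers_to_response(
    response: Any,
    attempted_retries: int,
    max_retries: Optional[int] = None,
) -> Any:
    """Attach retry metadata to a response's hidden params.

    Sketch only: header names and the `_hidden_params` /
    `additional_headers` layout are assumptions, not the
    library's confirmed API.
    """
    # Responses that can't carry metadata are passed through unchanged.
    if response is None:
        return response

    retry_headers = {"x-litellm-attempted-retries": attempted_retries}
    if max_retries is not None:
        retry_headers["x-litellm-max-retries"] = max_retries

    hidden_params = getattr(response, "_hidden_params", None) or {}
    if isinstance(hidden_params, dict):
        # Merge into any headers already stashed on the response.
        additional_headers = hidden_params.setdefault("additional_headers", {})
        additional_headers.update(retry_headers)
        try:
            response._hidden_params = hidden_params
        except AttributeError:
            # Some response types may not allow attribute assignment.
            pass
    return response
```

The accompanying tests in test_fallbacks.py would then read these values back from the proxy response headers and assert they match the per-model configuration.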
pre_call_checks/
router_callbacks/
add_retry_headers.py
batch_utils.py
client_initalization_utils.py
cooldown_cache.py
cooldown_callbacks.py
cooldown_handlers.py
fallback_event_handlers.py
get_retry_from_policy.py
handle_error.py
pattern_match_deployments.py
prompt_caching_cache.py
response_headers.py
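For context on how the model-specific retries and timeout described in the commit would be exercised, here is a hedged usage sketch. The deployment entry, the per-model `num_retries` and `timeout` keys, and the hidden-params lookup are illustrative assumptions rather than code taken from the repository's test suite, and running it requires valid provider credentials; consult test_fallbacks.py for the actual assertions.

```python
from litellm import Router

# Hypothetical deployment: the model name, num_retries, and timeout values
# below are illustrative, not copied from the repository's tests.
router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "gpt-3.5-turbo",
                "num_retries": 3,  # model-specific retry budget
                "timeout": 1,      # model-specific timeout (seconds)
            },
        }
    ]
)

response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
)

# Assuming retry metadata lands in the response's hidden params,
# a test could assert on it roughly like this:
headers = getattr(response, "_hidden_params", {}).get("additional_headers", {})
print(headers.get("x-litellm-attempted-retries"))
print(headers.get("x-litellm-max-retries"))
```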