Mirror of https://github.com/BerriAI/litellm.git, synced 2025-04-26 11:14:04 +00:00
* feat(router.py): support passing model-specific messages in fallbacks
* docs(routing.md): split router timeouts into a separate doc, allowing a single fallbacks doc (across proxy/router)
* docs(routing.md): clean up router docs
* docs(reliability.md): clean up docs
* docs(reliability.md): consolidate fallbacks into one doc across SDK/proxy to simplify the docs
* docs(reliability.md): add setting model-specific fallback prompts
* fix: fix linting errors
* test: skip test causing OpenAI rate limit errors
* test: fix test
* test: run vertex test first to catch error
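The first commit adds model-specific fallbacks to the router. A minimal generic sketch of that pattern is below; this is not litellm's actual implementation, and `route_with_fallbacks`, `flaky_completion`, and the model names are illustrative assumptions:

```python
# Minimal generic sketch of model-specific fallback routing (illustrative only;
# not litellm's actual router code -- all names here are hypothetical).
def route_with_fallbacks(model, messages, fallbacks, call):
    """Try `model` first; on failure, try each fallback configured for it.

    `fallbacks` maps a model name to an ordered list of fallback model names,
    e.g. {"gpt-4": ["gpt-3.5-turbo"]}. `call` performs the actual completion.
    """
    errors = []
    for candidate in [model] + fallbacks.get(model, []):
        try:
            return call(candidate, messages)
        except Exception as exc:  # broad catch: any deployment error triggers fallback
            errors.append((candidate, exc))
    raise RuntimeError(f"all deployments failed for {model}: {errors}")


# Example: "gpt-4" times out, so the request falls back to "gpt-3.5-turbo".
attempted = []

def flaky_completion(model, messages):
    attempted.append(model)
    if model == "gpt-4":
        raise TimeoutError("deployment unavailable")
    return f"response from {model}"

result = route_with_fallbacks(
    "gpt-4",
    [{"role": "user", "content": "hi"}],
    {"gpt-4": ["gpt-3.5-turbo"]},
    flaky_completion,
)
```

The "model-specific messages" part of the feature would extend this by letting each fallback entry carry its own prompt; the dispatch loop above stays the same.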
pre_call_checks
router_callbacks
batch_utils.py
client_initalization_utils.py
cooldown_cache.py
cooldown_callbacks.py
cooldown_handlers.py
fallback_event_handlers.py
get_retry_from_policy.py
handle_error.py
pattern_match_deployments.py
prompt_caching_cache.py
response_headers.py