Mirror of https://github.com/BerriAI/litellm.git, synced 2025-04-25 02:34:29 +00:00
* fix(route_llm_request.py): move to using the common router, even for client-side credentials. Ensures fallbacks / cooldown logic still works
* test(test_route_llm_request.py): add unit test for route request
* feat(router.py): generate a unique model id when a clientside credential is passed in. Prevents cooldowns for api key 1 from impacting api key 2
* test(test_router.py): update testing to ensure original litellm params are not mutated
* fix(router.py): upsert the clientside call into the llm router model list. Enables cooldown logic to work accurately
* fix: fix linting error
* test(test_router_utils.py): add direct test for the new util on the router
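The per-credential cooldown isolation described in the commit message can be sketched as follows. This is a minimal illustration with hypothetical names (`clientside_model_id` is not litellm's actual helper): deriving the model id deterministically from the client-side key keeps cooldown state for one key from affecting another, while staying stable across repeated calls with the same key.

```python
import hashlib

def clientside_model_id(model_name: str, api_key: str) -> str:
    """Derive a deterministic, per-credential model id.

    Hypothetical helper, not litellm's actual implementation. Hashing the
    client-side API key into the id gives each credential its own entry in
    the router's model list, so cooldown tracking is isolated per key.
    """
    key_hash = hashlib.sha256(api_key.encode()).hexdigest()[:8]
    return f"{model_name}-clientside-{key_hash}"

# Same model, different client-side keys -> distinct ids, so a cooldown
# triggered by key 1 does not cool down key 2.
id_1 = clientside_model_id("gpt-4o", "sk-key-1")
id_2 = clientside_model_id("gpt-4o", "sk-key-2")
assert id_1 != id_2

# The same key maps to the same id on every call, so repeated failures for
# one credential still aggregate into that credential's cooldown state.
assert id_1 == clientside_model_id("gpt-4o", "sk-key-1")
```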
pre_call_checks
router_callbacks
add_retry_fallback_headers.py
batch_utils.py
client_initalization_utils.py
clientside_credential_handler.py
cooldown_cache.py
cooldown_callbacks.py
cooldown_handlers.py
fallback_event_handlers.py
get_retry_from_policy.py
handle_error.py
pattern_match_deployments.py
prompt_caching_cache.py
response_headers.py