Mirror of https://github.com/BerriAI/litellm.git (synced 2025-04-25 02:34:29 +00:00)
* feat(utils.py): support global flag for 'check_provider_endpoints'; enables setting this for `/models` on proxy
* feat(utils.py): add caching to 'get_valid_models'; prevents checking the endpoint repeatedly
* fix(utils.py): ensure mutations don't impact cached results
* test(test_utils.py): add unit test to confirm cache invalidation logic
* feat(utils.py): get_valid_models - support passing litellm params dynamically; allows checking endpoints based on received credentials
* test: update test
* feat(model_checks.py): pass router credentials to get_valid_models - ensures it checks the correct credentials
* refactor(utils.py): refactor for simpler functions
* fix: fix linting errors
* fix(utils.py): fix test
* fix(utils.py): set valid providers to custom_llm_provider, if given
* test: update test
* fix: fix ruff check error
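The caching and mutation-safety bullets above can be sketched as follows. This is a hypothetical illustration, not LiteLLM's actual implementation: the cache structure, the `_TTL_SECONDS` value, and the `_fetch_models_from_provider` helper are all assumptions made for the example; only the idea (cache the provider model list, and hand callers a copy so their mutations don't corrupt the cached entry) comes from the commit message.

```python
import time
from typing import List, Optional

# Hypothetical module-level cache: key -> (timestamp, model list).
_CACHE: dict = {}
_TTL_SECONDS = 300  # assumed TTL; the real expiry policy is implementation-specific


def get_valid_models(custom_llm_provider: Optional[str] = None) -> List[str]:
    """Sketch of a cached model lookup.

    Returns a *copy* of the cached list, so a caller mutating the result
    cannot impact cached results (the 'fix(utils.py)' bullet above).
    """
    key = custom_llm_provider or "all"
    now = time.time()
    cached = _CACHE.get(key)
    if cached and now - cached[0] < _TTL_SECONDS:
        return list(cached[1])  # copy on read: protects the cached entry
    models = _fetch_models_from_provider(custom_llm_provider)
    _CACHE[key] = (now, list(models))  # copy on write, for the same reason
    return list(models)


def _fetch_models_from_provider(provider: Optional[str]) -> List[str]:
    # Placeholder for the expensive provider-endpoint check the cache avoids
    # repeating; a real version would call each provider's /models endpoint.
    if provider is None:
        return ["gpt-4o", "claude-3"]
    return [f"{provider}/model"]
```

A caller appending to the returned list only changes its own copy; a later call within the TTL still sees the original cached entry.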
auth_checks.py
auth_checks_organization.py
auth_exception_handler.py
auth_utils.py
handle_jwt.py
litellm_license.py
model_checks.py
oauth2_check.py
oauth2_proxy_hook.py
public_key.pem
rds_iam_token.py
route_checks.py
user_api_key_auth.py