Mirror of https://github.com/BerriAI/litellm.git (synced 2025-04-24 18:24:20 +00:00)
* feat(utils.py): support global flag for 'check_provider_endpoints'; enables setting this for `/models` on the proxy
* feat(utils.py): add caching to 'get_valid_models'; prevents checking the endpoint repeatedly
* fix(utils.py): ensure mutations don't impact cached results
* test(test_utils.py): add unit test to confirm cache invalidation logic
* feat(utils.py): get_valid_models supports passing litellm params dynamically; allows checking endpoints based on received credentials
* test: update test
* feat(model_checks.py): pass router credentials to get_valid_models; ensures it checks the correct credentials
* refactor(utils.py): refactor into simpler functions
* fix: fix linting errors
* fix(utils.py): fix test
* fix(utils.py): set valid providers to custom_llm_provider, if given
* test: update test
* fix: fix ruff check error
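The caching and mutation-safety points above can be sketched as follows. This is a minimal illustration, not litellm's actual implementation: the cache key, TTL, and `_fetch_valid_models` helper are all hypothetical stand-ins for the real endpoint check. The key idea matches the commit notes: cache the result of a provider lookup, and return a copy so caller mutations don't corrupt the cached value.

```python
import copy
import time

_cache: dict = {}  # provider -> (timestamp, cached model list); hypothetical structure
_TTL_SECONDS = 300  # hypothetical cache lifetime

def _fetch_valid_models(provider: str) -> list[str]:
    # Stand-in for the real (potentially slow) provider endpoint check.
    return [f"{provider}/model-a", f"{provider}/model-b"]

def get_valid_models(provider: str) -> list[str]:
    """Return valid models for a provider, caching results per provider.

    A deep copy is returned so that mutations by the caller do not
    impact the cached result (the fix described in the commit message).
    """
    now = time.time()
    entry = _cache.get(provider)
    if entry is not None and now - entry[0] < _TTL_SECONDS:
        return copy.deepcopy(entry[1])
    models = _fetch_valid_models(provider)
    _cache[provider] = (now, models)
    return copy.deepcopy(models)
```

Returning a copy on both the hit and miss paths is what keeps the cache immune to in-place mutation by callers.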
Files in this directory:

* conftest.py
* log.txt
* test_aws_secret_manager.py
* test_get_secret.py
* test_hashicorp.py
* test_litellm_overhead.py
* test_logging_callback_manager.py
* test_proxy_budget_reset.py
* test_secret_manager.py
* test_supports_tool_choice.py
* test_utils.py
* vertex_key.json