llama-stack-mirror/llama_stack/providers
Matthew Farrellee 47c078fcef
feat: implement dynamic model detection support for inference providers using litellm (#2886)
# What does this PR do?

This change lets inference providers built on `LiteLLMOpenAIMixin` validate
model availability against LiteLLM's official per-provider model listings,
improving reliability and the user experience when working with different
AI service providers.

- Add a `litellm_provider_name` parameter to the `LiteLLMOpenAIMixin` constructor
- Add a `check_model_availability` method to `LiteLLMOpenAIMixin` backed by
`litellm.models_by_provider` (see the sketch after this list)
- Update the Gemini, Groq, and SambaNova inference adapters to pass
`litellm_provider_name`
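
The core behavior can be illustrated with a short sketch. This is not the code merged in #2886; it assumes `litellm.models_by_provider` (a dict litellm exposes that maps a provider name to the model ids it knows about) and uses deliberately simplified, synchronous, hypothetical class names.

```python
# Illustrative sketch only -- simplified from the behavior described above,
# not the implementation merged in #2886.
import litellm


class LiteLLMOpenAIMixinSketch:
    """Hypothetical stand-in for LiteLLMOpenAIMixin's new constructor argument."""

    def __init__(self, litellm_provider_name: str | None = None):
        # Provider key as it appears in litellm's registry,
        # e.g. "gemini", "groq", or "sambanova".
        self.litellm_provider_name = litellm_provider_name

    def check_model_availability(self, model: str) -> bool:
        """Return True if litellm lists `model` for this provider."""
        if not self.litellm_provider_name:
            # No provider name to validate against; accept the model as-is.
            return True
        known = litellm.models_by_provider.get(self.litellm_provider_name, [])
        return model in known


# Hypothetical adapter wiring, mirroring the third bullet: each adapter
# passes its litellm provider name down to the mixin.
class GeminiAdapterSketch(LiteLLMOpenAIMixinSketch):
    def __init__(self):
        super().__init__(litellm_provider_name="gemini")


if __name__ == "__main__":
    adapter = GeminiAdapterSketch()
    # Peek at a few ids litellm lists for this provider, then validate one.
    sample = litellm.models_by_provider.get("gemini", [])[:3]
    print("sample gemini models:", sample)
    print(adapter.check_model_availability(sample[0]) if sample else "no models listed")
```

A lookup against litellm's bundled registry stays local (no network call); the trade-off is that only models litellm has catalogued for the provider can be validated this way.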

## Test Plan

Standard CI.
2025-07-28 10:13:54 -07:00
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| `inline` | feat(starter)!: simplify starter distro; litellm model registry changes (#2916) | 2025-07-25 15:02:04 -07:00 |
| `registry` | feat: implement chunk deletion for vector stores (#2701) | 2025-07-25 10:30:30 -04:00 |
| `remote` | fix: litellm_provider_name for llama-api (#2934) | 2025-07-28 10:02:16 -07:00 |
| `utils` | feat: implement dynamic model detection support for inference providers using litellm (#2886) | 2025-07-28 10:13:54 -07:00 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `datatypes.py` | feat(starter)!: simplify starter distro; litellm model registry changes (#2916) | 2025-07-25 15:02:04 -07:00 |