Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-07-29 07:14:20 +00:00.
# What does this PR do?

This enhancement allows inference providers using `LiteLLMOpenAIMixin` to validate model availability against LiteLLM's official provider model listings, improving reliability and user experience when working with different AI service providers.

- Add a `litellm_provider_name` parameter to the `LiteLLMOpenAIMixin` constructor
- Add a `check_model_availability` method to `LiteLLMOpenAIMixin` using `litellm.models_by_provider`
- Update the Gemini, Groq, and SambaNova inference adapters to pass `litellm_provider_name`

## Test Plan

Standard CI.
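As a rough illustration of the availability check described above, the sketch below shows one way such a method could work. It assumes `litellm.models_by_provider` is a dict mapping a provider name (e.g. `"gemini"`) to a list of known model identifiers; the mapping is injected here so the sketch is self-contained, and the class and parameter names beyond those mentioned in the PR are hypothetical.

```python
class LiteLLMOpenAIMixinSketch:
    """Hypothetical sketch of the model-availability check, not the actual mixin."""

    def __init__(self, litellm_provider_name: str, models_by_provider: dict):
        # The real mixin would consult litellm.models_by_provider directly;
        # we take the mapping as a constructor argument for illustration.
        self.litellm_provider_name = litellm_provider_name
        self._models_by_provider = models_by_provider

    def check_model_availability(self, model: str) -> bool:
        # A model is considered available if it appears in the
        # provider's official model listing.
        listing = self._models_by_provider.get(self.litellm_provider_name, [])
        return model in listing
```

With a listing like `{"gemini": ["gemini-1.5-pro"]}`, checking `"gemini-1.5-pro"` would succeed while an unlisted model would be rejected before any inference call is attempted.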
Directory listing:

- __init__.py
- embedding_mixin.py
- inference_store.py
- litellm_openai_mixin.py
- model_registry.py
- openai_compat.py
- openai_mixin.py
- prompt_adapter.py
- stream_utils.py