feat: implement dynamic model detection support for inference providers using litellm

This enhancement allows inference providers using LiteLLMOpenAIMixin to validate
model availability against LiteLLM's official provider model listings, improving
reliability and user experience when working with different AI service providers.

- Add litellm_provider_name parameter to LiteLLMOpenAIMixin constructor
- Add check_model_availability method to LiteLLMOpenAIMixin using litellm.models_by_provider (a minimal sketch follows this list)
- Update Gemini, Groq, and SambaNova inference adapters to pass litellm_provider_name
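Below is a minimal, illustrative sketch of how these pieces could fit together. It assumes only what the commit message states (a litellm_provider_name constructor parameter, a check_model_availability method, and litellm's models_by_provider mapping); the class body and signatures are simplified and are not the exact upstream implementation.

```python
# Sketch only: a simplified mixin showing the idea behind this commit.
# The real LiteLLMOpenAIMixin has many more responsibilities; the names
# litellm_provider_name and check_model_availability come from the commit
# message, the rest is illustrative.
import litellm


class LiteLLMOpenAIMixin:
    def __init__(self, litellm_provider_name: str | None = None, **kwargs):
        # Provider key as used by litellm, e.g. "gemini", "groq", "sambanova".
        self.litellm_provider_name = litellm_provider_name

    async def check_model_availability(self, model: str) -> bool:
        # litellm.models_by_provider maps a provider name to the model ids
        # it serves; an unknown or unset provider yields no matches.
        if not self.litellm_provider_name:
            return False
        return model in litellm.models_by_provider.get(self.litellm_provider_name, [])
```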
Author: Matthew Farrellee
Date:   2025-07-24 09:49:32 -04:00
Parent: cd8715d327
Commit: bf63470c22
4 changed files with 33 additions and 0 deletions

@@ -184,6 +184,7 @@ class SambaNovaInferenceAdapter(LiteLLMOpenAIMixin):
             model_entries=MODEL_ENTRIES,
             api_key_from_config=self.config.api_key.get_secret_value() if self.config.api_key else None,
             provider_data_api_key_field="sambanova_api_key",
+            litellm_provider_name="sambanova",
         )
 
     def _get_api_key(self) -> str:
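For a quick look at which model ids the new check would accept for the provider named in the hunk above, one can inspect litellm's listing directly; this is illustrative only, and the output depends on the installed litellm version:

```python
import litellm

# Models LiteLLM associates with the "sambanova" provider key.
print(litellm.models_by_provider.get("sambanova", []))
```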