llama-stack-mirror/llama_stack
Matthew Farrellee 47c078fcef
feat: implement dynamic model detection support for inference providers using litellm (#2886)
# What does this PR do?

This enhancement allows inference providers using LiteLLMOpenAIMixin to
validate model availability against LiteLLM's official provider model
listings, improving reliability and user experience when working with
different AI service providers.

- Add litellm_provider_name parameter to LiteLLMOpenAIMixin constructor
- Add check_model_availability method to LiteLLMOpenAIMixin using
litellm.models_by_provider (a sketch follows this list)
- Update Gemini, Groq, and SambaNova inference adapters to pass
litellm_provider_name
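
For illustration, here is a minimal sketch of how these pieces could fit together. Only `litellm_provider_name`, `check_model_availability`, and `litellm.models_by_provider` are taken from this PR; everything else (the constructor shape, the keyword handling) is simplified, and the real mixin in the providers tree may differ.

```python
# Hedged sketch only -- not the actual llama_stack implementation.
import litellm


class LiteLLMOpenAIMixin:
    def __init__(self, litellm_provider_name: str, **kwargs):
        # Provider key as LiteLLM spells it (e.g. "gemini", "groq", "sambanova").
        self.litellm_provider_name = litellm_provider_name

    async def check_model_availability(self, model: str) -> bool:
        # litellm.models_by_provider maps a provider name to the list of
        # model identifiers LiteLLM knows for that provider.
        available = litellm.models_by_provider.get(self.litellm_provider_name, [])
        return model in available
```

Under this sketch, an adapter such as the Groq one would pass `litellm_provider_name="groq"` when initializing the mixin, so a request for a model LiteLLM does not list for that provider can be rejected early.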

## Test Plan

Standard CI.
2025-07-28 10:13:54 -07:00
| Name | Last commit | Date |
|------|-------------|------|
| apis | feat(auth): API access control (#2822) | 2025-07-24 15:30:48 -07:00 |
| cli | fix: separate build and run provider types (#2917) | 2025-07-25 12:39:26 -07:00 |
| distribution | fix: switch refresh to debug log (#2933) | 2025-07-28 10:02:54 -07:00 |
| models | chore(api): add mypy coverage to chat_format (#2654) | 2025-07-18 11:56:53 +02:00 |
| providers | feat: implement dynamic model detection support for inference providers using litellm (#2886) | 2025-07-28 10:13:54 -07:00 |
| strong_typing | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| templates | feat(starter)!: simplify starter distro; litellm model registry changes (#2916) | 2025-07-25 15:02:04 -07:00 |
| ui | chore(deps): bump form-data from 4.0.2 to 4.0.4 in /llama_stack/ui (#2898) | 2025-07-24 21:24:56 -04:00 |
| __init__.py | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| env.py | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| log.py | fix: use logger for console telemetry (#2844) | 2025-07-24 16:26:59 -04:00 |
| schema_utils.py | feat(auth): API access control (#2822) | 2025-07-24 15:30:48 -07:00 |