llama-stack-mirror/llama_stack/providers
Matthew Farrellee f731f369a2
feat: add infrastructure to allow inference model discovery (#2710)
# What does this PR do?

Inference providers each have a static list of supported / known models.
Some also have access to a dynamic list of currently available models.
This change gives providers using the ModelRegistryHelper the ability to
combine their static and dynamic lists.

For instance, OpenAIInferenceAdapter can implement
```
    def query_available_models(self) -> list[str]:
        # each entry returned by the OpenAI client exposes the model name via .id
        return [entry.id for entry in self.openai_client.models.list()]
```
to augment its static list with a current list from OpenAI.
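
Below is a minimal sketch of how a provider might merge its static and dynamic lists. ModelRegistryHelper, OpenAIInferenceAdapter, and query_available_models come from the PR description; the constructor signature, the static_models field, and the all_models merge helper are illustrative assumptions, not the actual helper API.

```python
# Hypothetical sketch: combining a static model list with models
# discovered dynamically from the OpenAI API.

from openai import OpenAI


class OpenAIInferenceAdapter:
    def __init__(self, api_key: str, static_models: list[str]):
        self.openai_client = OpenAI(api_key=api_key)
        # Static list of models the provider knows about at build time.
        self.static_models = static_models

    def query_available_models(self) -> list[str]:
        # Dynamic list fetched from the OpenAI API at runtime.
        return [entry.id for entry in self.openai_client.models.list()]

    def all_models(self) -> list[str]:
        # Union of the static and dynamic lists, order-preserving and de-duplicated.
        seen: set[str] = set()
        merged: list[str] = []
        for model_id in self.static_models + self.query_available_models():
            if model_id not in seen:
                seen.add(model_id)
                merged.append(model_id)
        return merged
```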

## Test Plan

scripts/unit-test.sh
2025-07-14 11:38:53 -07:00
| Name | Last commit | Last update |
|------|-------------|-------------|
| inline | fix(faiss): Delete file contents from kvstore (#2686) | 2025-07-14 13:58:23 -04:00 |
| registry | fix: only load mcp when enabled in tool_group (#2621) | 2025-07-04 20:27:05 +05:30 |
| remote | fix: Don't cache clients for passthrough auth providers (#2728) | 2025-07-11 13:38:27 -07:00 |
| utils | feat: add infrastructure to allow inference model discovery (#2710) | 2025-07-14 11:38:53 -07:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| datatypes.py | docs: auto generated documentation for providers (#2543) | 2025-06-30 15:13:20 +02:00 |