llama-stack-mirror/llama_stack
Matthew Farrellee f731f369a2
feat: add infrastructure to allow inference model discovery (#2710)
# What does this PR do?

Inference providers each have a static list of supported/known models.
Some also have access to a dynamic list of currently available models.
This change gives providers using the `ModelRegistryHelper` the ability to
combine their static and dynamic lists.

For instance, `OpenAIInferenceAdapter` can implement
```python
    def query_available_models(self) -> list[str]:
        # each entry returned by the OpenAI client is a Model object whose
        # `id` field is the model identifier
        return [entry.id for entry in self.openai_client.models.list()]
```
to augment its static list with a current list from OpenAI.
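
To make the combination step concrete, here is a minimal sketch of how a provider's static list could be merged with the dynamically discovered one. Everything in it except `query_available_models` (the hook named above) is a hypothetical illustration rather than the actual `ModelRegistryHelper` code: `ModelDiscoveryMixin`, `static_model_ids`, and `all_model_ids` are made up for this example.

```python
# Hypothetical sketch only: not the real ModelRegistryHelper implementation.

class ModelDiscoveryMixin:
    """Illustrates merging a static model list with a discovered one."""

    # static list baked into the provider (contents are placeholders)
    static_model_ids: list[str] = []

    def query_available_models(self) -> list[str]:
        # Default: no dynamic discovery. Providers with a live endpoint
        # (e.g. OpenAIInferenceAdapter above) override this method.
        return []

    def all_model_ids(self) -> list[str]:
        # Merge static and dynamic lists, de-duplicating while preserving order.
        seen: set[str] = set()
        merged: list[str] = []
        for model_id in [*self.static_model_ids, *self.query_available_models()]:
            if model_id not in seen:
                seen.add(model_id)
                merged.append(model_id)
        return merged
```

With a shape like this, a provider only needs to override `query_available_models()` to opt into discovery; providers without a dynamic endpoint keep working from their static list alone.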

## Test Plan

scripts/unit-test.sh
2025-07-14 11:38:53 -07:00
| Path | Last commit | Date |
|---|---|---|
| apis | feat: add input validation for search mode of rag query config (#2275) | 2025-07-14 09:11:34 -04:00 |
| cli | chore(api): add mypy coverage to cli/stack (#2650) | 2025-07-10 16:53:38 +02:00 |
| distribution | fix: container build on podman (#2723) | 2025-07-11 16:25:33 +02:00 |
| models | chore(api): add mypy coverage to prompts (#2657) | 2025-07-09 10:07:00 +02:00 |
| providers | feat: add infrastructure to allow inference model discovery (#2710) | 2025-07-14 11:38:53 -07:00 |
| strong_typing | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| templates | chore: Adding unit tests for OpenAI vector stores and migrating SQLite-vec registry to kvstore (#2665) | 2025-07-10 14:22:13 -04:00 |
| ui | feat: Add Vector stores UI (#2737) | 2025-07-13 01:03:55 -07:00 |
| __init__.py | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| env.py | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| log.py | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| schema_utils.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |