llama-stack-mirror/llama_stack
Matthew Farrellee d035fe93c6 feat: add infrastructure to allow inference model discovery
inference providers each have a static list of supported / known models. some also
have access to a dynamic list of currently available models. this change gives
providers using the ModelRegistryHelper the ability to combine their static and
dynamic lists.

for instance, OpenAIInferenceAdapter can implement
```
    def query_available_models(self) -> list[str]:
        # list the model ids currently exposed by the OpenAI API
        return [entry.id for entry in self.openai_client.models.list()]
```
to augment its static list with a current list from OpenAI.
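
for illustration only, a minimal sketch of how a static list and a dynamic query
could be merged; the class and method names below (other than query_available_models)
are hypothetical and not the actual ModelRegistryHelper API:
```
# hypothetical sketch, not the real ModelRegistryHelper -- shows one way a
# provider's static model ids can be combined with ids discovered at runtime.
class ExampleModelRegistry:
    def __init__(self, static_model_ids: list[str]):
        # models the provider always advertises
        self.static_model_ids = static_model_ids

    def query_available_models(self) -> list[str]:
        # providers override this to ask the remote service what is live;
        # the default contributes nothing beyond the static list
        return []

    def all_model_ids(self) -> list[str]:
        # union of static and dynamic lists, static entries first,
        # duplicates dropped while preserving order
        seen: set[str] = set()
        combined: list[str] = []
        for model_id in [*self.static_model_ids, *self.query_available_models()]:
            if model_id not in seen:
                seen.add(model_id)
                combined.append(model_id)
        return combined
```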
2025-07-09 12:52:30 -04:00
apis chore(api): add mypy coverage to apis (#2648) 2025-07-09 12:55:16 +02:00
cli fix: use --template flag for server (#2643) 2025-07-08 00:48:50 -07:00
distribution feat(auth): support github tokens (#2509) 2025-07-08 11:02:36 -07:00
models chore(api): add mypy coverage to prompts (#2657) 2025-07-09 10:07:00 +02:00
providers feat: add infrastructure to allow inference model discovery 2025-07-09 12:52:30 -04:00
strong_typing chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
templates fix: Restore the nvidia distro (#2639) 2025-07-07 15:50:05 -07:00
ui feat(auth,ui): support github sign-in in the UI (#2545) 2025-07-08 11:02:57 -07:00
__init__.py export LibraryClient 2024-12-13 12:08:00 -08:00
env.py refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) 2025-03-04 14:53:47 -08:00
log.py chore: remove nested imports (#2515) 2025-06-26 08:01:05 +05:30
schema_utils.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00