Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-07-26 22:19:49 +00:00)
This flips #2823 and #2805: the Stack now periodically queries providers for their models, rather than providers going behind its back and calling `register` on the registry themselves. It also adds model-listing support for all other providers via `ModelRegistryHelper`. Once this is done, models no longer need to be manually listed or registered in `run.yaml`, which removes both noise and annoyance (for example, setting the `INFERENCE_MODEL` environment variable) from the new-user experience. In addition, it adds a configuration variable, `allowed_models`, which can optionally restrict the set of models a provider exposes.
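To illustrate, a provider entry in `run.yaml` could pair `refresh_models` with an `allowed_models` allow-list. This is a sketch, not the definitive schema: the exact placement of the fields and the model names shown are assumptions.

```yaml
# Hypothetical run.yaml fragment (field placement and model names are assumed):
# the provider refreshes its model list periodically, but only the models on
# the allow-list are exposed through the Stack.
providers:
  inference:
    - provider_id: ollama
      provider_type: remote::ollama
      config:
        url: ${env.OLLAMA_URL:=http://localhost:11434}
        refresh_models: true
        allowed_models:
          - llama3.2:3b
```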
remote::ollama
Description
Ollama inference provider for running local models through the Ollama runtime.
Configuration
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `url` | `<class 'str'>` | No | `http://localhost:11434` | |
| `refresh_models` | `<class 'bool'>` | No | `False` | Whether to refresh models periodically |
Sample Configuration
```yaml
url: ${env.OLLAMA_URL:=http://localhost:11434}
```
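The `refresh_models` flag from the table above can be enabled alongside the URL. A minimal sketch of such a configuration, assuming the flag sits at the same level as `url`:

```yaml
# Sketch: enable periodic model refresh for the Ollama provider.
url: ${env.OLLAMA_URL:=http://localhost:11434}
refresh_models: true
```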