
# remote::ollama

## Description

Ollama inference provider for running local models through the Ollama runtime.

## Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `url` | `<class 'str'>` | No | `http://localhost:11434` | |
| `refresh_models` | `<class 'bool'>` | No | `False` | Whether to refresh models periodically |
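The two fields above map directly onto a provider's `config` block. As a hedged sketch of how that might look inside a distribution's `run.yaml` (the surrounding `providers` layout and the `provider_id` value are assumptions for illustration; only the `config` fields come from the table above):

```yaml
providers:
  inference:
  - provider_id: ollama            # assumed identifier, pick your own
    provider_type: remote::ollama
    config:
      url: ${env.OLLAMA_URL:=http://localhost:11434}
      refresh_models: false
```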

## Sample Configuration

```yaml
url: ${env.OLLAMA_URL:=http://localhost:11434}
```
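The `${env.OLLAMA_URL:=http://localhost:11434}` form substitutes the `OLLAMA_URL` environment variable when it is set and falls back to the default after `:=` otherwise. A minimal Python sketch of those semantics (the `resolve_ollama_url` helper is hypothetical, not part of llama-stack):

```python
import os

def resolve_ollama_url() -> str:
    # Mirrors ${env.OLLAMA_URL:=http://localhost:11434}: use the
    # environment variable if present, otherwise the ":=" default.
    return os.environ.get("OLLAMA_URL", "http://localhost:11434")

print(resolve_ollama_url())
```

With `OLLAMA_URL` unset this prints the default `http://localhost:11434`; exporting `OLLAMA_URL` before starting the stack points the provider at a different Ollama server.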