# remote::ollama
## Description
Ollama inference provider for running local models through the Ollama runtime.
## Configuration
| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `url` | `str` | No | http://localhost:11434 | URL of the Ollama server |
| `refresh_models` | `bool` | No | False | Whether to periodically refresh and re-register the models available on the server |
| `refresh_models_interval` | `int` | No | 300 | Interval, in seconds, between model refreshes |
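
For reference, a sketch of a configuration that sets every field explicitly; the values shown are simply the defaults from the table above:

```yaml
url: http://localhost:11434    # base URL of the Ollama server
refresh_models: false          # do not periodically re-register models
refresh_models_interval: 300   # refresh period in seconds (relevant when refresh_models is enabled)
```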
## Sample Configuration
```yaml
url: ${env.OLLAMA_URL:=http://localhost:11434}
```
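
In a full Llama Stack run configuration, this block typically appears under the `inference` API in the `providers` section. A minimal sketch is shown below; the `provider_id` value `ollama` is an arbitrary label:

```yaml
providers:
  inference:
    - provider_id: ollama
      provider_type: remote::ollama
      config:
        url: ${env.OLLAMA_URL:=http://localhost:11434}
```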