---
orphan: true
---
# remote::ollama
## Description
Ollama inference provider for running local models through the Ollama runtime.
## Configuration
| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `url` | `str` | No | http://localhost:11434 | URL of the Ollama server |
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically |
## Sample Configuration
```yaml
url: ${env.OLLAMA_URL:=http://localhost:11434}
```
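The `${env.OLLAMA_URL:=http://localhost:11434}` expression resolves to the `OLLAMA_URL` environment variable when it is set, falling back to the default otherwise. As a minimal sketch, assuming the standard `run.yaml` providers layout, this configuration would be wired in like so:

```yaml
# Hypothetical excerpt from a run.yaml; provider_id is a user-chosen label.
providers:
  inference:
    - provider_id: ollama
      provider_type: remote::ollama
      config:
        url: ${env.OLLAMA_URL:=http://localhost:11434}
        refresh_models: false
```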