# remote::ollama

## Description

Ollama inference provider for running local models through the Ollama runtime.

## Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `url` | `str` | No | `http://localhost:11434` | URL of the Ollama server |
| `refresh_models` | `bool` | No | `False` | Whether to periodically refresh and re-register models |
| `refresh_models_interval` | `int` | No | `300` | Interval in seconds between model refreshes |

## Sample Configuration

```yaml
url: ${env.OLLAMA_URL:=http://localhost:11434}
```
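
As a sketch, a configuration that also enables periodic model refresh might look like the following. Only the fields from the table above are used; the values are illustrative:

```yaml
# Ollama server endpoint; ${env.OLLAMA_URL:=...} substitutes the
# OLLAMA_URL environment variable, falling back to the default when unset.
url: ${env.OLLAMA_URL:=http://localhost:11434}
# Re-register available models every 600 seconds instead of the default 300.
refresh_models: true
refresh_models_interval: 600
```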