---
description: "Ollama inference provider for running local models through the Ollama runtime."
sidebar_label: Remote - Ollama
title: remote::ollama
---
# remote::ollama
## Description
Ollama inference provider for running local models through the Ollama runtime.
## Configuration
| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
| `url` | `str` | No | http://localhost:11434 | URL of the Ollama server. |
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically. |
## Sample Configuration
```yaml
url: ${env.OLLAMA_URL:=http://localhost:11434}
```
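
For illustration, the snippet below sketches how these fields might appear together in the `providers` section of a distribution's run config. The provider entry layout follows the common `provider_id`/`provider_type`/`config` convention used in Llama Stack run files, and the model IDs under `allowed_models` are placeholder examples, not values from this document.

```yaml
# Illustrative sketch only: combines all documented fields in one provider entry.
providers:
  inference:
    - provider_id: ollama
      provider_type: remote::ollama
      config:
        # Falls back to the local Ollama default if OLLAMA_URL is unset.
        url: ${env.OLLAMA_URL:=http://localhost:11434}
        refresh_models: false
        # Placeholder model IDs; omit allowed_models to register all models.
        allowed_models:
          - llama3.2:3b
          - nomic-embed-text
```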