---
description: "Ollama inference provider for running local models through the Ollama runtime."
sidebar_label: Remote - Ollama
title: remote::ollama
---

# remote::ollama

## Description

Ollama inference provider for running local models through the Ollama runtime.

## Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
| `url` | `str` | No | http://localhost:11434 | URL of the Ollama server. |
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically. |
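The optional fields compose with `url` in a single config block. Below is a sketch of a config that pins the registry to specific models and enables periodic refresh; the values are illustrative choices, not defaults:

```yaml
# Illustrative values; any model tag available in your Ollama install works.
url: ${env.OLLAMA_URL:=http://localhost:11434}
refresh_models: true
allowed_models:
- llama3.2:3b
- llama3.2:1b
```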
## Sample Configuration

```yaml
url: ${env.OLLAMA_URL:=http://localhost:11434}
```
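The `${env.OLLAMA_URL:=http://localhost:11434}` form substitutes the `OLLAMA_URL` environment variable, falling back to the default after `:=` when the variable is unset. For context, here is a minimal sketch of where this config block sits inside a stack run file, assuming the usual `providers.inference` layout; the `provider_id` label is a free-form identifier:

```yaml
# Hypothetical run.yaml excerpt.
providers:
  inference:
  - provider_id: ollama
    provider_type: remote::ollama
    config:
      url: ${env.OLLAMA_URL:=http://localhost:11434}
```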