* Removes a bunch of distros
* Removed distros were added into the "starter" distribution
* Doc for "starter" has been added
* Partially reverts https://github.com/meta-llama/llama-stack/pull/2482, since inference providers are now disabled by default and can be turned on manually via an environment variable.
* Disables safety in the starter distro

Closes: #2502

Signed-off-by: Sébastien Han <seb@redhat.com>
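As a rough sketch of the environment-variable toggle mentioned above: a provider entry in the starter distribution's run.yaml can resolve its provider id from an environment variable, staying disabled unless that variable is set. The variable name and the `__disabled__` sentinel below are illustrative assumptions, not taken from the PR itself.

```yaml
# Hypothetical sketch: the provider is only enabled when ENABLE_OLLAMA is set.
# The variable name and "__disabled__" sentinel are assumptions for illustration.
providers:
  inference:
  - provider_id: ${env.ENABLE_OLLAMA:=__disabled__}
    provider_type: remote::ollama
    config:
      url: ${env.OLLAMA_URL:=http://localhost:11434}
```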
# remote::ollama

## Description
Ollama inference provider for running local models through the Ollama runtime.
## Configuration
| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `url` | `str` | No | `http://localhost:11434` | URL of the Ollama server |
## Sample Configuration

```yaml
url: ${env.OLLAMA_URL:=http://localhost:11434}
```
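The `${env.OLLAMA_URL:=http://localhost:11434}` form is environment-variable substitution with a default: if `OLLAMA_URL` is set when the stack starts, its value is used; otherwise the value after `:=` applies. A small sketch of how the field resolves (the override value is illustrative):

```yaml
# With OLLAMA_URL unset, the default after ":=" is used:
url: http://localhost:11434

# With OLLAMA_URL=http://my-ollama-host:11434 exported in the environment:
url: http://my-ollama-host:11434
```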