# What does this PR do?

* Removes a bunch of distros
* Removed distros were added into the "starter" distribution
* Doc for "starter" has been added
* Partially reverts https://github.com/meta-llama/llama-stack/pull/2482 since inference providers are disabled by default and can be turned on manually via env variable.
* Disables safety in starter distro

Closes: https://github.com/meta-llama/llama-stack/issues/2502.

~Needs: https://github.com/meta-llama/llama-stack/pull/2482 for Ollama to work properly in the CI.~

TODO:

- [ ] We can only update `install.sh` when we get a new release.
- [x] Update providers documentation
- [ ] Update notebooks to reference starter instead of ollama

Signed-off-by: Sébastien Han <seb@redhat.com>
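Since the PR makes inference providers opt-in and configurable through environment variables, here is a minimal sketch of what an Ollama provider entry could look like in a distribution's `run.yaml`. The `providers`/`inference` layout and the `provider_id` value are illustrative assumptions; only `remote::ollama` and the `OLLAMA_URL` substitution are taken from the provider documentation below.

```yaml
# Hedged sketch of a run.yaml provider entry, not the actual starter config.
# The exact enable/disable switch used by the starter distro may differ per release.
providers:
  inference:
  - provider_id: ollama            # illustrative id
    provider_type: remote::ollama  # provider documented below
    config:
      url: ${env.OLLAMA_URL:=http://localhost:11434}  # env var with a fallback default
```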
# remote::ollama

## Description
Ollama inference provider for running local models through the Ollama runtime.
## Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `url` | `<class 'str'>` | No | http://localhost:11434 | |
## Sample Configuration

```yaml
url: ${env.OLLAMA_URL:=http://localhost:11434}
```
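The `${env.OLLAMA_URL:=...}` form reads `OLLAMA_URL` from the environment and falls back to the literal after `:=` when the variable is unset. A sketch of how the sample resolves in each case (the substitution semantics are inferred from the `:=` default syntax; the hostname in the second case is hypothetical):

```yaml
# OLLAMA_URL unset: the default after ':=' applies
url: http://localhost:11434

# OLLAMA_URL=http://ollama.internal:11434 exported before launch (hypothetical host)
url: http://ollama.internal:11434
```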