# llama-stack-mirror/docs/source/distributions
Latest commit fd2aaf4978 by Ashwin Bharambe (2025-07-30 10:11:17 -07:00)

fix: use OLLAMA_URL to activate Ollama provider in starter (#2963)
We had tried to keep Ollama enabled unconditionally. However, doing so leaves the
provider implementation in an awkward half-working state: should it raise an error
when it cannot connect to Ollama, or not? What should happen during periodic model
refresh? Instead, do the same thing we do for vLLM: use the `OLLAMA_URL` environment
variable to conditionally enable the provider.
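
For illustration, here is a rough sketch of what this looks like from the user's side, mirroring how `VLLM_URL` drives the vLLM provider; the endpoint values below are assumed local defaults, not something this change prescribes.

```bash
# Illustrative sketch only; the URLs are assumed local defaults.
export VLLM_URL=http://localhost:8000/v1     # vLLM provider: enabled when set
export OLLAMA_URL=http://localhost:11434     # Ollama provider: now enabled the same way
# Leaving either variable unset keeps the corresponding provider disabled.
```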

## Test Plan

Run `uv run llama stack build --template starter --image-type venv --run` with and
without `OLLAMA_URL` set. Verify with `llama-stack-client provider list` that the
Ollama provider is enabled only when `OLLAMA_URL` is set.
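
A minimal sketch of that verification, built from the commands above (the `OLLAMA_URL` value is an assumed local default):

```bash
# With OLLAMA_URL set, Ollama should appear as an enabled provider.
OLLAMA_URL=http://localhost:11434 uv run llama stack build --template starter --image-type venv --run
# in a second terminal:
llama-stack-client provider list

# Without OLLAMA_URL, the same stack should come up with Ollama disabled.
uv run llama stack build --template starter --image-type venv --run
llama-stack-client provider list
```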
| Name | Last commit | Date |
|------|-------------|------|
| eks | fix: update k8s templates (#2645) | 2025-07-08 15:57:01 -07:00 |
| k8s | chore: update k8s template (#2786) | 2025-07-16 15:07:26 -07:00 |
| ondevice_distro | docs: Update links to Android Demo App (#2687) | 2025-07-09 15:41:57 +02:00 |
| remote_hosted_distro | fix: replace all instances of --yaml-config with --config (#2196) | 2025-05-16 14:31:12 -07:00 |
| self_hosted_distro | fix: use OLLAMA_URL to activate Ollama provider in starter (#2963) | 2025-07-30 10:11:17 -07:00 |
| building_distro.md | docs: clarify run.yaml files are starting points for customization (#2746) | 2025-07-14 09:53:13 -07:00 |
| configuration.md | feat(auth): API access control (#2822) | 2025-07-24 15:30:48 -07:00 |
| customizing_run_yaml.md | docs: clarify run.yaml files are starting points for customization (#2746) | 2025-07-14 09:53:13 -07:00 |
| importing_as_library.md | docs: update using llama stack as library docs (#2931) | 2025-07-28 15:35:26 -07:00 |
| index.md | docs: Reorganize documentation on the webpage (#2651) | 2025-07-15 14:19:35 -07:00 |
| list_of_distributions.md | fix: Restore the nvidia distro (#2639) | 2025-07-07 15:50:05 -07:00 |
| starting_llama_stack_server.md | docs: Reorganize documentation on the webpage (#2651) | 2025-07-15 14:19:35 -07:00 |