# llama-stack-mirror/docs/source

Latest commit fd2aaf4978 by Ashwin Bharambe (2025-07-30 10:11:17 -07:00):

## fix: use OLLAMA_URL to activate Ollama provider in starter (#2963)

We tried to always keep Ollama enabled. However, doing so makes the provider implementation half-assed: should it error when it cannot connect to Ollama, or not? What happens during the periodic model refresh? And so on. Instead, we now do the same thing we do for vLLM: use the `OLLAMA_URL` environment variable to conditionally enable the provider.
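
A minimal sketch of the resulting behavior (the endpoints below are common local defaults, not something this change prescribes; `VLLM_URL` is the analogous variable the starter distribution uses for vLLM):

```bash
# Enable the ollama provider: point OLLAMA_URL at a reachable Ollama server.
export OLLAMA_URL=http://localhost:11434

# The vLLM provider follows the same pattern via its own URL variable.
export VLLM_URL=http://localhost:8000/v1

# Leave a variable unset and the corresponding provider stays disabled.
unset OLLAMA_URL
```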

## Test Plan

Run `uv run llama stack build --template starter --image-type venv --run` with and without `OLLAMA_URL` set. Verify using `llama-stack-client provider list` that ollama is enabled only when the variable is set.
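
For example, a verification sequence might look like this (a sketch assuming a local Ollama server on its default port; the client command is run in a second shell against the running stack):

```bash
# With OLLAMA_URL set, ollama should appear in the provider list.
OLLAMA_URL=http://localhost:11434 uv run llama stack build --template starter --image-type venv --run
llama-stack-client provider list   # run in a second shell

# Without OLLAMA_URL, ollama should be absent.
unset OLLAMA_URL
uv run llama stack build --template starter --image-type venv --run
llama-stack-client provider list   # run in a second shell
```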
## Contents

| Path | Last commit | Date |
|------|-------------|------|
| advanced_apis | docs: Reorganize documentation on the webpage (#2651) | 2025-07-15 14:19:35 -07:00 |
| apis | feat: Bring Your Own API (BYOA) (#2228) | 2025-07-24 13:41:14 -07:00 |
| building_applications | docs: Document use cases for Responses and Agents APIs (#2756) | 2025-07-24 12:20:04 -04:00 |
| concepts | docs: update list of apis (#2697) | 2025-07-24 09:50:14 -07:00 |
| contributing | chore: create OpenAIMixin for inference providers with an OpenAI-compat API that need to implement `openai_*` methods (#2835) | 2025-07-23 06:49:40 -04:00 |
| deploying | chore: update k8s template (#2786) | 2025-07-16 15:07:26 -07:00 |
| distributions | fix: use OLLAMA_URL to activate Ollama provider in starter (#2963) | 2025-07-30 10:11:17 -07:00 |
| getting_started | fix: use OLLAMA_URL to activate Ollama provider in starter (#2963) | 2025-07-30 10:11:17 -07:00 |
| providers | feat(openai): add configurable base_url support with OPENAI_BASE_URL env var (#2919) | 2025-07-28 10:16:02 -07:00 |
| references | docs: update outdated llama stack client documentation (#2758) | 2025-07-15 11:49:59 -07:00 |
| conf.py | docs: Reorganize documentation on the webpage (#2651) | 2025-07-15 14:19:35 -07:00 |
| index.md | docs: Reorganize documentation on the webpage (#2651) | 2025-07-15 14:19:35 -07:00 |