llama-stack-mirror/llama_stack
Ashwin Bharambe · commit fd2aaf4978
fix: use OLLAMA_URL to activate Ollama provider in starter (#2963)
We tried to always keep the Ollama provider enabled. However, doing so leaves the
provider implementation half-baked -- should it error when it cannot
connect to Ollama or not? What happens during the periodic model refresh?
Instead, do the same thing we do for vLLM -- use the `OLLAMA_URL`
environment variable to conditionally enable the provider.
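
As a rough illustration of the conditional-enable pattern (the function name and the dict layout below are hypothetical, not the starter template's actual code), the provider entry is only constructed when the environment variable is present:

```python
import os


def maybe_ollama_provider() -> dict | None:
    """Build an Ollama inference provider entry only when OLLAMA_URL is set.

    Hypothetical sketch of the conditional-enable pattern; the real starter
    template wires this through its own provider and config classes.
    """
    url = os.environ.get("OLLAMA_URL")
    if not url:
        # No URL configured: the provider is never registered, so there is
        # nothing to half-work (no connection errors, no model-refresh noise).
        return None
    return {
        "provider_id": "ollama",
        "provider_type": "remote::ollama",
        "config": {"url": url},
    }


# Providers whose env var is unset simply drop out of the list.
inference_providers = [p for p in (maybe_ollama_provider(),) if p is not None]
```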

## Test Plan

Run `uv run llama stack build --template starter --image-type venv --run`
with and without `OLLAMA_URL` set. Verify with
`llama-stack-client provider list` that the ollama provider is enabled only
when `OLLAMA_URL` is set.
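
For example, assuming a local Ollama server at its default address:

```shell
# Ollama enabled: the starter distribution should register the ollama provider
OLLAMA_URL=http://localhost:11434 uv run llama stack build --template starter --image-type venv --run

# Ollama disabled: with OLLAMA_URL unset, the provider should not be listed
uv run llama stack build --template starter --image-type venv --run

# in another terminal, inspect the registered providers
llama-stack-client provider list
```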
2025-07-30 10:11:17 -07:00
| Path | Last commit | Date |
|------|-------------|------|
| `apis` | feat: add base64 encoded PDF support for OpenAI Chat Completions (#2881) | 2025-07-29 06:23:41 -04:00 |
| `cli` | fix: use same image_name logic for build & run config (#2949) | 2025-07-29 12:54:21 -07:00 |
| `distribution` | fix(library_client): improve initialization error handling and prevent AttributeError (#2944) | 2025-07-30 11:58:47 -04:00 |
| `models` | chore(api): add mypy coverage to chat_format (#2654) | 2025-07-18 11:56:53 +02:00 |
| `providers` | fix: Update SFTConfig parameter to fix CI and Post Training Workflow (#2948) | 2025-07-29 11:14:04 -07:00 |
| `strong_typing` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `templates` | fix: use OLLAMA_URL to activate Ollama provider in starter (#2963) | 2025-07-30 10:11:17 -07:00 |
| `testing` | feat(tests): introduce inference record/replay to increase test reliability (#2941) | 2025-07-29 12:41:31 -07:00 |
| `ui` | fix: random breakage in llama_stack/ui/package.json | 2025-07-29 12:31:29 -07:00 |
| `__init__.py` | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| `env.py` | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| `log.py` | fix: use logger for console telemetry (#2844) | 2025-07-24 16:26:59 -04:00 |
| `schema_utils.py` | feat(auth): API access control (#2822) | 2025-07-24 15:30:48 -07:00 |