fix: Ollama should be optional in starter distro

Our starter distro required Ollama to be running (and a long list of
models to be available in that Ollama instance) in order to start
successfully. This change adjusts things so that Ollama does not have
to be running to use the starter template / distro.

To accomplish this, a few changes were needed:

* The Ollama provider can now be configured to either raise an
exception or just log a warning when it cannot reach the Ollama
server on startup. The default is to raise an exception (the same as
the previous behavior), but the starter template sets it to log a
warning so the stack can come up without a running Ollama server.

* The starter template no longer specifies a default list of models
for Ollama, since any models listed there have to actually be pulled
and available in Ollama. Instead, it adds a new
`OLLAMA_INFERENCE_MODEL` environment variable that users can set to
register an optional model with the Ollama provider on
startup. Additional models can still be registered at runtime via the
usual `models.register(...)`.
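
For example, with the stack up, an extra Ollama model can be
registered from the Python client (a sketch assuming the llama-stack
client and the default stack port; the model name is just an example
and must already be pulled in Ollama):

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Register a model that the running Ollama server already has pulled.
client.models.register(
    model_id="llama3.2:3b",
    provider_id="ollama",
)
```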

* The vLLM provider in the starter template similarly accepts an
optional `VLLM_INFERENCE_MODEL` specified on startup, so that vLLM
and Ollama behave consistently and it is easy to get up and running
quickly with either one.
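
Both model variables use the template's `${env.VAR:default}`
substitution with a `__disabled__` sentinel as the default, so no
model entry is registered when the variable is unset. Illustratively
(this little resolver only sketches the substitution semantics; it is
not the stack's actual parser):

```python
import os


def resolve(template: str) -> str:
    # Resolves "${env.NAME:default}" -> value of NAME, or the default when unset.
    inner = template[len("${env."):-1]  # e.g. "VLLM_INFERENCE_MODEL:__disabled__"
    name, _, default = inner.partition(":")
    return os.environ.get(name, default)


model = resolve("${env.VLLM_INFERENCE_MODEL:__disabled__}")
if model == "__disabled__":
    print("VLLM_INFERENCE_MODEL not set; no vLLM model registered at startup")
else:
    print(f"registering vLLM model {model}")
```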

* The default vector store was changed from sqlite-vec to
faiss. sqlite-vec can still be enabled by setting the
`ENABLE_SQLITE_VEC` environment variable, as we already do for
chromadb and pgvector. This is because sqlite-vec does not ship
proper arm64 binaries, the same issue we previously fixed in #1530
for the ollama distribution.
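
The sqlite-vec provider entry uses the conditional
`${env.ENABLE_SQLITE_VEC+sqlite-vec}` form: the value after `+` is
substituted only when the variable is set at all, so the provider is
picked up only when `ENABLE_SQLITE_VEC` is present. Same idea as the
resolver above, sketched for the `+` form (again illustrative, not
the actual parser):

```python
import os


def resolve_conditional(template: str) -> str | None:
    # Resolves "${env.NAME+value}" -> value when NAME is set, otherwise None.
    inner = template[len("${env."):-1]  # e.g. "ENABLE_SQLITE_VEC+sqlite-vec"
    name, _, value = inner.partition("+")
    return value if name in os.environ else None


provider_id = resolve_conditional("${env.ENABLE_SQLITE_VEC+sqlite-vec}")
print(provider_id or "sqlite-vec disabled; faiss remains the default vector store")
```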

With this change, the following scenarios that did not work before
now work with the starter template:

* no Ollama running
* Ollama running but not all of the Llama models pulled locally
* Ollama running with a custom model registered on startup
* vLLM running with a custom model registered on startup
* running the starter template on linux/arm64, such as when running
containers on a Mac without Rosetta emulation

Signed-off-by: Ben Browning <bbrownin@redhat.com>
Ben Browning 2025-06-20 10:36:51 -04:00
parent cfee63bd0d
commit 404708e99d
9 changed files with 89 additions and 175 deletions

@ -16,6 +16,7 @@ from llama_stack.providers.inline.files.localfs.config import LocalfsFilesImplCo
 from llama_stack.providers.inline.inference.sentence_transformers import (
     SentenceTransformersInferenceConfig,
 )
+from llama_stack.providers.inline.vector_io.faiss.config import FaissVectorIOConfig
 from llama_stack.providers.inline.vector_io.sqlite_vec.config import (
     SQLiteVectorIOConfig,
 )
@ -36,9 +37,6 @@ from llama_stack.providers.remote.inference.groq.models import (
     MODEL_ENTRIES as GROQ_MODEL_ENTRIES,
 )
 from llama_stack.providers.remote.inference.ollama.config import OllamaImplConfig
-from llama_stack.providers.remote.inference.ollama.models import (
-    MODEL_ENTRIES as OLLAMA_MODEL_ENTRIES,
-)
 from llama_stack.providers.remote.inference.openai.config import OpenAIConfig
 from llama_stack.providers.remote.inference.openai.models import (
     MODEL_ENTRIES as OPENAI_MODEL_ENTRIES,
@ -85,8 +83,15 @@ def get_inference_providers() -> tuple[list[Provider], dict[str, list[ProviderMo
         ),
         (
             "ollama",
-            OLLAMA_MODEL_ENTRIES,
-            OllamaImplConfig.sample_run_config(),
+            [
+                ProviderModelEntry(
+                    provider_model_id="${env.OLLAMA_INFERENCE_MODEL:__disabled__}",
+                    model_type=ModelType.llm,
+                ),
+            ],
+            OllamaImplConfig.sample_run_config(
+                url="${env.OLLAMA_URL:http://localhost:11434}", raise_on_connect_error=False
+            ),
         ),
         (
             "anthropic",
@ -110,7 +115,12 @@ def get_inference_providers() -> tuple[list[Provider], dict[str, list[ProviderMo
         ),
         (
             "vllm",
-            [],
+            [
+                ProviderModelEntry(
+                    provider_model_id="${env.VLLM_INFERENCE_MODEL:__disabled__}",
+                    model_type=ModelType.llm,
+                ),
+            ],
             VLLMInferenceAdapterConfig.sample_run_config(
                 url="${env.VLLM_URL:http://localhost:8000/v1}",
             ),
@ -153,7 +163,12 @@ def get_distribution_template() -> DistributionTemplate:
     vector_io_providers = [
         Provider(
-            provider_id="sqlite-vec",
+            provider_id="faiss",
+            provider_type="inline::faiss",
+            config=FaissVectorIOConfig.sample_run_config(f"~/.llama/distributions/{name}"),
+        ),
+        Provider(
+            provider_id="${env.ENABLE_SQLITE_VEC+sqlite-vec}",
             provider_type="inline::sqlite-vec",
             config=SQLiteVectorIOConfig.sample_run_config(f"~/.llama/distributions/{name}"),
         ),
@ -257,7 +272,19 @@ def get_distribution_template() -> DistributionTemplate:
             ),
             "VLLM_URL": (
                 "http://localhost:8000/v1",
-                "VLLM URL",
+                "vLLM URL",
             ),
+            "VLLM_INFERENCE_MODEL": (
+                "",
+                "Optional vLLM Inference Model to register on startup",
+            ),
+            "OLLAMA_URL": (
+                "http://localhost:11434",
+                "Ollama URL",
+            ),
+            "OLLAMA_INFERENCE_MODEL": (
+                "",
+                "Optional Ollama Inference Model to register on startup",
+            ),
         },
     )