For self-hosted providers like Ollama (or vLLM), the backing server runs a set of models. That server should be treated as the source of truth, with the Stack registry acting only as a cache of those models. In production environments you may not want this behavior (because you know statically which models you are running), so a config boolean controls it.

_This is part of a series of PRs aimed at removing the requirement to set `INFERENCE_MODEL` environment variables when running the Llama Stack server._

## Test Plan

Copy and modify the starter.yaml template / config and enable `refresh_models: true, refresh_models_interval: 10` for the ollama provider (see the config sketch below). Then run:

```
LLAMA_STACK_LOGGING=all=debug \
  ENABLE_OLLAMA=ollama uv run llama stack run --image-type venv /tmp/starter.yaml
```

You will see a gargantuan amount of logs, but verify that the provider is periodically refreshing its models. Stop and prune a model from the ollama server, then restart the server. Verify that the model disappears from the output of `uv run llama-stack-client models list`.
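For reference, a minimal sketch of the relevant provider section of a modified starter.yaml. The overall shape (`providers` → `inference` → `remote::ollama`) and the `url` field follow typical Llama Stack run configs and are assumptions here, not copied from this PR; only the two `refresh_models*` keys come from the test plan above:

```yaml
# Hypothetical excerpt of /tmp/starter.yaml -- surrounding structure is assumed.
providers:
  inference:
    - provider_id: ollama
      provider_type: remote::ollama
      config:
        url: http://localhost:11434   # assumed default Ollama endpoint
        refresh_models: true          # treat the Ollama server as the source of truth
        refresh_models_interval: 10   # re-poll the backing server every 10 seconds
```

A short interval like 10 seconds is only useful for watching the refresh happen in the debug logs; a real deployment would either use a much longer interval or leave `refresh_models` off entirely when the model set is known statically.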