llama-stack-mirror/llama_stack/providers/remote/inference
Ashwin Bharambe 68a2dfbad7
feat(ollama): periodically refresh models (#2805)
For self-hosted providers like Ollama (or vLLM), the backing server is
running a set of models. That server should be treated as the source of
truth, and the Stack registry should just be a cache for those models. Of
course, in production environments you may not want this (because you
know statically which model you are running), so there is a config
boolean to control this behavior.
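
As a rough illustration of the idea, here is a hypothetical sketch of a
periodic refresh loop; it is not the provider's actual code, and
`list_server_models` and `update_registry_cache` are made-up names:

```python
import asyncio
from typing import Any

async def refresh_models_loop(provider: Any, interval: int) -> None:
    """Poll the backing server and mirror its model list into the registry.

    Hypothetical sketch: `list_server_models` and `update_registry_cache`
    are illustrative names, not the actual provider API.
    """
    while True:
        try:
            models = await provider.list_server_models()   # server is the source of truth
            await provider.update_registry_cache(models)   # registry acts as a cache
        except Exception as exc:
            # Keep polling even if the server is briefly unreachable (e.g. a restart)
            print(f"model refresh failed: {exc}")
        await asyncio.sleep(interval)
```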

_This is part of a series of PRs aimed at removing the requirement of
needing to set `INFERENCE_MODEL` env variables for running Llama Stack
server._

## Test Plan

Copy and modify the `starter.yaml` template/config and enable
`refresh_models: true, refresh_models_interval: 10` for the ollama
provider, as sketched below.
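
A fragment of the modified config might look like the following; the
nesting and the `url` value are assumptions based on the starter
template, while the refresh fields come straight from this test plan:

```yaml
providers:
  inference:
    - provider_id: ollama
      provider_type: remote::ollama
      config:
        url: http://localhost:11434       # default Ollama endpoint
        refresh_models: true              # treat the server as the source of truth
        refresh_models_interval: 10       # poll interval (seconds, assumed unit)
```

Then, run: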

```
LLAMA_STACK_LOGGING=all=debug \
  ENABLE_OLLAMA=ollama uv run llama stack run --image-type venv /tmp/starter.yaml
```

You will see a gargantuan amount of logs, but verify that the provider is
periodically refreshing models. Then stop and prune a model from the
Ollama server, and restart the server. Verify that the model goes away
when you run `uv run llama-stack-client models list`.
2025-07-18 12:20:36 -07:00
| Name | Last commit | Date |
|---|---|---|
| anthropic | ci: test safety with starter (#2628) | 2025-07-09 16:53:50 +02:00 |
| bedrock | ci: test safety with starter (#2628) | 2025-07-09 16:53:50 +02:00 |
| cerebras | ci: test safety with starter (#2628) | 2025-07-09 16:53:50 +02:00 |
| cerebras_openai_compat | feat: introduce APIs for retrieving chat completion requests (#2145) | 2025-05-18 21:43:19 -07:00 |
| databricks | ci: test safety with starter (#2628) | 2025-07-09 16:53:50 +02:00 |
| fireworks | ci: test safety with starter (#2628) | 2025-07-09 16:53:50 +02:00 |
| fireworks_openai_compat | feat: introduce APIs for retrieving chat completion requests (#2145) | 2025-05-18 21:43:19 -07:00 |
| gemini | ci: test safety with starter (#2628) | 2025-07-09 16:53:50 +02:00 |
| groq | fix: Don't cache clients for passthrough auth providers (#2728) | 2025-07-11 13:38:27 -07:00 |
| groq_openai_compat | feat: introduce APIs for retrieving chat completion requests (#2145) | 2025-05-18 21:43:19 -07:00 |
| llama_openai_compat | feat: create dynamic model registration for OpenAI and Llama compat remote inference providers (#2745) | 2025-07-16 12:49:38 -04:00 |
| nvidia | feat: allow dynamic model registration for nvidia inference provider (#2726) | 2025-07-17 12:11:30 -07:00 |
| ollama | feat(ollama): periodically refresh models (#2805) | 2025-07-18 12:20:36 -07:00 |
| openai | feat: create dynamic model registration for OpenAI and Llama compat remote inference providers (#2745) | 2025-07-16 12:49:38 -04:00 |
| passthrough | feat: consolidate most distros into "starter" (#2516) | 2025-07-04 15:58:03 +02:00 |
| runpod | ci: test safety with starter (#2628) | 2025-07-09 16:53:50 +02:00 |
| sambanova | fix: sambanova shields and model validation (#2693) | 2025-07-11 16:29:15 -04:00 |
| sambanova_openai_compat | feat: introduce APIs for retrieving chat completion requests (#2145) | 2025-05-18 21:43:19 -07:00 |
| tgi | feat: consolidate most distros into "starter" (#2516) | 2025-07-04 15:58:03 +02:00 |
| together | fix: Don't cache clients for passthrough auth providers (#2728) | 2025-07-11 13:38:27 -07:00 |
| together_openai_compat | feat: introduce APIs for retrieving chat completion requests (#2145) | 2025-05-18 21:43:19 -07:00 |
| vllm | refactor(env)!: enhanced environment variable substitution (#2490) | 2025-06-26 08:20:08 +05:30 |
| watsonx | fix: allow default empty vars for conditionals (#2570) | 2025-07-01 14:42:05 +02:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |