llama-stack-mirror/llama_stack/providers/remote/inference/ollama
Ashwin Bharambe 199f859eec
feat(vllm): periodically refresh models (#2823)
Just like #2805 but for vLLM.

We also make the `VLLM_URL` environment variable optional (not required) -- if it is
not specified, the provider sits idle and only raises an error when someone
actually tries to call a completion on it. This is done so that the provider
can be present in the `starter` distribution even when no vLLM server is configured.

## Test Plan

Set up vLLM, copy the starter template, and set `{ refresh_models: true,
refresh_models_interval: 10 }` for the vllm provider (see the config sketch below), then run:

```
ENABLE_VLLM=vllm VLLM_URL=http://localhost:8000/v1 \
  uv run llama stack run --image-type venv /tmp/starter.yaml
```
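
For reference, a minimal sketch of what the vllm provider entry in `/tmp/starter.yaml` might look like with these settings. The `refresh_models` and `refresh_models_interval` fields come from this PR; the surrounding structure and the env-substitution syntax are assumptions based on typical llama-stack run configs, so your copy of the template may differ:

```yaml
providers:
  inference:
  - provider_id: vllm
    provider_type: remote::vllm
    config:
      # URL is now optional; if unset, the provider idles until a call arrives.
      url: ${env.VLLM_URL}
      refresh_models: true
      refresh_models_interval: 10   # refresh period (assumed to be seconds)
```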

Verify that `llama-stack-client models list` brings up the model
correctly from vLLM.
2025-07-18 15:53:09 -07:00
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | fix: Ollama should be optional in starter distro (#2482) | 2025-06-25 15:54:00 +02:00 |
| `config.py` | feat(ollama): periodically refresh models (#2805) | 2025-07-18 12:20:36 -07:00 |
| `models.py` | fix: Safety in starter (#2731) | 2025-07-14 15:07:40 -07:00 |
| `ollama.py` | feat(vllm): periodically refresh models (#2823) | 2025-07-18 15:53:09 -07:00 |