# remote::vllm

## Description

Remote vLLM inference provider for connecting to vLLM servers.
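
As a sketch of where this fits, assuming the standard Llama Stack run config layout (the `provider_id` and endpoint URL below are placeholders), the provider is registered under the `inference` API of a distribution's run config:

```yaml
# Sketch of a run.yaml excerpt; provider_id and url are placeholders.
providers:
  inference:
  - provider_id: vllm
    provider_type: remote::vllm
    config:
      url: http://localhost:8000/v1
      api_token: fake
```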

## Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `url` | `str \| None` | No | | The URL for the vLLM model serving endpoint |
| `max_tokens` | `<class 'int'>` | No | 4096 | Maximum number of tokens to generate. |
| `api_token` | `str \| None` | No | fake | The API token |
| `tls_verify` | `bool \| str` | No | True | Whether to verify TLS certificates. Can be a boolean or a path to a CA certificate file. |
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically |
| `refresh_models_interval` | `<class 'int'>` | No | 300 | Interval in seconds to refresh models |
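
Note that `tls_verify` accepts either a boolean or a path: pointing it at a CA bundle verifies the server's certificate against that file. A minimal sketch combining this with the refresh settings (the hostname and CA path are invented):

```yaml
# Sketch only: the hostname and CA bundle path below are placeholders.
url: https://vllm.internal:8000/v1
tls_verify: /etc/ssl/certs/internal-ca.pem  # verify TLS against this CA file
refresh_models: true                        # periodically re-list models from the server
refresh_models_interval: 60                 # refresh every 60 seconds (default is 300)
```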

## Sample Configuration

```yaml
url: ${env.VLLM_URL:=}
max_tokens: ${env.VLLM_MAX_TOKENS:=4096}
api_token: ${env.VLLM_API_TOKEN:=fake}
tls_verify: ${env.VLLM_TLS_VERIFY:=true}
```
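
Each `${env.NAME:=value}` placeholder resolves from the environment at startup and falls back to the value after `:=` when the variable is unset. `VLLM_URL` intentionally has an empty fallback: without a URL the provider sits idle and only raises an error when an inference call is attempted, which allows it to be included in distributions such as `starter` without a running vLLM server.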