name: vllm
distribution_spec:
  description: Like local, but use vLLM for running LLM inference
  providers:
    inference: vllm
    memory: meta-reference
    safety: meta-reference
    agents: meta-reference
    telemetry: meta-reference
image_type: conda