From 2b4f37fd438f0c0d920b7ee7e466f4649dae8efd Mon Sep 17 00:00:00 2001
From: Yuan Tang
Date: Thu, 20 Mar 2025 14:19:53 -0400
Subject: [PATCH] Fix pre-commit

Signed-off-by: Yuan Tang
---
 llama_stack/templates/remote-vllm/doc_template.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/llama_stack/templates/remote-vllm/doc_template.md b/llama_stack/templates/remote-vllm/doc_template.md
index 0ca7279a7..8abef18fb 100644
--- a/llama_stack/templates/remote-vllm/doc_template.md
+++ b/llama_stack/templates/remote-vllm/doc_template.md
@@ -48,6 +48,8 @@ docker run \
   --port $INFERENCE_PORT
 ```
 
+Note that you'll also need to set `--enable-auto-tool-choice` and `--tool-call-parser` to [enable tool calling in vLLM](https://docs.vllm.ai/en/latest/features/tool_calling.html).
+
 If you are using Llama Stack Safety / Shield APIs, then you will need to also run another instance of a vLLM with a corresponding safety model like `meta-llama/Llama-Guard-3-1B` using a script like:
 
 ```bash
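For context on the note this patch adds: the two flags are passed to the vLLM server at launch. A minimal sketch of how they might slot into the `docker run` invocation from the surrounding template (the model name, port value, and the `llama3_json` parser choice are illustrative assumptions here; the correct `--tool-call-parser` value depends on the model being served, per the linked vLLM docs):

```shell
# Sketch: launching vLLM with tool calling enabled.
# INFERENCE_PORT and INFERENCE_MODEL mirror the variables used in the
# remote-vllm template; the llama3_json parser is an example choice.
export INFERENCE_PORT=8000
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct

docker run \
  --pull always \
  -p "$INFERENCE_PORT:$INFERENCE_PORT" \
  vllm/vllm-openai:latest \
  --model "$INFERENCE_MODEL" \
  --port "$INFERENCE_PORT" \
  --enable-auto-tool-choice \
  --tool-call-parser llama3_json
```

Without these flags, vLLM serves completions normally but will not emit structured tool calls, which is what the added documentation note is warning about.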