docs: Add instruction on enabling tool calling for remote vLLM

Yuan Tang, 2025-03-20 10:25:54 -04:00 (committed by GitHub)
parent af8b4484a3
commit 63cb0fbf54


@@ -61,6 +61,8 @@ docker run \
--port $INFERENCE_PORT
```
Note that you'll also need to set `--enable-auto-tool-choice` and `--tool-call-parser` to [enable tool calling in vLLM](https://docs.vllm.ai/en/latest/features/tool_calling.html).
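For example, the extra flags can be appended to the server command above. This is only a sketch: the `--model $INFERENCE_MODEL` flag and the rest of the invocation are assumed to mirror the earlier command, and `llama3_json` is just one parser choice (commonly used with Llama 3.x models); consult the vLLM tool calling docs for the parser that matches your model.
```bash
# Sketch only: everything except the last two flags mirrors the command above
# and may differ in your setup. llama3_json is an example parser for Llama 3.x
# models; see the vLLM tool calling docs for other parser options.
docker run \
    vllm/vllm-openai:latest \
    --model $INFERENCE_MODEL \
    --port $INFERENCE_PORT \
    --enable-auto-tool-choice \
    --tool-call-parser llama3_json
```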
If you are using the Llama Stack Safety / Shield APIs, you will also need to run a second vLLM instance serving a corresponding safety model such as `meta-llama/Llama-Guard-3-1B`, using a script like:
```bash