From ca439788099ee042463f7593d4776d523ea4dfb6 Mon Sep 17 00:00:00 2001
From: Yuan Tang
Date: Thu, 17 Apr 2025 20:28:24 -0400
Subject: [PATCH] docs: Add tips for debugging remote vLLM provider

Signed-off-by: Yuan Tang
---
 llama_stack/templates/remote-vllm/doc_template.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/llama_stack/templates/remote-vllm/doc_template.md b/llama_stack/templates/remote-vllm/doc_template.md
index fe50e9d49..c207d559b 100644
--- a/llama_stack/templates/remote-vllm/doc_template.md
+++ b/llama_stack/templates/remote-vllm/doc_template.md
@@ -31,7 +31,7 @@ The following environment variables can be configured:
 
 In the following sections, we'll use AMD, NVIDIA or Intel GPUs to serve as hardware accelerators for the vLLM
 server, which acts as both the LLM inference provider and the safety provider. Note that vLLM also
 [supports many other hardware accelerators](https://docs.vllm.ai/en/latest/getting_started/installation.html) and
-that we only use GPUs here for demonstration purposes.
+that we only use GPUs here for demonstration purposes. If you are running into issues, there is a new environment variable `VLLM_DEBUG_LOG_API_SERVER_RESPONSE` (available in vLLM v0.8.3 and above) that enables logging of API server responses for debugging.
 
 ### Setting up vLLM server on AMD GPU
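
For context, a minimal sketch of how the debugging variable described in this patch might be passed to a vLLM server container. The image tag, model name, and port below are illustrative placeholders, not values taken from the patch or the template:

```bash
# Illustrative only: enable vLLM's API server response logging (vLLM v0.8.3+).
# Image tag, model, and port are placeholder values for demonstration.
docker run --rm \
    --gpus all \
    --env VLLM_DEBUG_LOG_API_SERVER_RESPONSE=true \
    -p 8000:8000 \
    vllm/vllm-openai:latest \
    --model meta-llama/Llama-3.2-3B-Instruct \
    --port 8000
```

With the variable set, the vLLM API server logs its responses, which can help diagnose issues between the remote vLLM provider and the server.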