From a2b7075fe91fed8ee030f5d8cc09b29da4ccb9d5 Mon Sep 17 00:00:00 2001
From: Yuan Tang
Date: Fri, 18 Apr 2025 08:30:41 -0400
Subject: [PATCH] More specific guidance

Signed-off-by: Yuan Tang
---
 docs/source/distributions/self_hosted_distro/remote-vllm.md | 2 +-
 llama_stack/templates/remote-vllm/doc_template.md           | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/distributions/self_hosted_distro/remote-vllm.md b/docs/source/distributions/self_hosted_distro/remote-vllm.md
index 4405605ea..46df56008 100644
--- a/docs/source/distributions/self_hosted_distro/remote-vllm.md
+++ b/docs/source/distributions/self_hosted_distro/remote-vllm.md
@@ -44,7 +44,7 @@ The following environment variables can be configured:
 
 In the following sections, we'll use AMD, NVIDIA or Intel GPUs to serve as hardware accelerators for the vLLM
 server, which acts as both the LLM inference provider and the safety provider. Note that vLLM also
 [supports many other hardware accelerators](https://docs.vllm.ai/en/latest/getting_started/installation.html) and
-that we only use GPUs here for demonstration purposes. Note that if you are running into issues, there's a new environment variable `VLLM_DEBUG_LOG_API_SERVER_RESPONSE` (available in vLLM v0.8.3 and above) to enable log response from API server for debugging.
+that we only use GPUs here for demonstration purposes. Note that if you run into issues, you can add `--env VLLM_DEBUG_LOG_API_SERVER_RESPONSE=true` (available in vLLM v0.8.3 and above) to the `docker run` command to enable logging of API server responses for debugging.
 
 ### Setting up vLLM server on AMD GPU
diff --git a/llama_stack/templates/remote-vllm/doc_template.md b/llama_stack/templates/remote-vllm/doc_template.md
index c207d559b..3cede6080 100644
--- a/llama_stack/templates/remote-vllm/doc_template.md
+++ b/llama_stack/templates/remote-vllm/doc_template.md
@@ -31,7 +31,7 @@ The following environment variables can be configured:
 
 In the following sections, we'll use AMD, NVIDIA or Intel GPUs to serve as hardware accelerators for the vLLM
 server, which acts as both the LLM inference provider and the safety provider. Note that vLLM also
 [supports many other hardware accelerators](https://docs.vllm.ai/en/latest/getting_started/installation.html) and
-that we only use GPUs here for demonstration purposes. Note that if you are running into issues, there's a new environment variable `VLLM_DEBUG_LOG_API_SERVER_RESPONSE` (available in vLLM v0.8.3 and above) to enable log response from API server for debugging.
+that we only use GPUs here for demonstration purposes. Note that if you run into issues, you can add `--env VLLM_DEBUG_LOG_API_SERVER_RESPONSE=true` (available in vLLM v0.8.3 and above) to the `docker run` command to enable logging of API server responses for debugging.
 
 ### Setting up vLLM server on AMD GPU
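
For context, a minimal sketch of how the debug flag described above would be passed when launching the vLLM container. The image tag, port, and model name are illustrative assumptions only and are not part of this patch; GPU and authentication flags are omitted for brevity.

```bash
# Illustrative only: image tag, port, and model are placeholders, not taken from this patch.
docker run --rm -it \
    --env VLLM_DEBUG_LOG_API_SERVER_RESPONSE=true \
    -p 8000:8000 \
    vllm/vllm-openai:latest \
    --model meta-llama/Llama-3.2-3B-Instruct \
    --port 8000
```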