diff --git a/docs/source/getting_started/distributions/self_hosted_distro/remote-vllm.md b/docs/source/getting_started/distributions/self_hosted_distro/remote-vllm.md
index db067c196..884e9a13c 100644
--- a/docs/source/getting_started/distributions/self_hosted_distro/remote-vllm.md
+++ b/docs/source/getting_started/distributions/self_hosted_distro/remote-vllm.md
@@ -107,7 +107,7 @@ docker run \
   --env INFERENCE_MODEL=$INFERENCE_MODEL \
   --env VLLM_URL=http://host.docker.internal:$INFERENCE_PORT/v1 \
   --env SAFETY_MODEL=$SAFETY_MODEL \
-  --env VLLM_SAFETY_URL=http://host.docker.internal:$SAFETY_PORT/v1
+  --env SAFETY_VLLM_URL=http://host.docker.internal:$SAFETY_PORT/v1
 ```
 
 
@@ -126,7 +126,7 @@ llama stack build --template remote-vllm --image-type conda
 llama stack run ./run.yaml \
   --port $LLAMA_STACK_PORT \
   --env INFERENCE_MODEL=$INFERENCE_MODEL \
-  --env VLLM_URL=http://127.0.0.1:$INFERENCE_PORT/v1
+  --env VLLM_URL=http://localhost:$INFERENCE_PORT/v1
 ```
 
 If you are using Llama Stack Safety / Shield APIs, use:
@@ -138,7 +138,7 @@ export SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
 llama stack run ./run-with-safety.yaml \
   --port $LLAMA_STACK_PORT \
   --env INFERENCE_MODEL=$INFERENCE_MODEL \
-  --env VLLM_URL=http://127.0.0.1:$INFERENCE_PORT/v1 \
+  --env VLLM_URL=http://localhost:$INFERENCE_PORT/v1 \
   --env SAFETY_MODEL=$SAFETY_MODEL \
-  --env VLLM_SAFETY_URL=http://127.0.0.1:$SAFETY_PORT/v1
+  --env SAFETY_VLLM_URL=http://localhost:$SAFETY_PORT/v1
 ```
diff --git a/llama_stack/templates/remote-vllm/doc_template.md b/llama_stack/templates/remote-vllm/doc_template.md
index 88f5a6e2e..aca4fc643 100644
--- a/llama_stack/templates/remote-vllm/doc_template.md
+++ b/llama_stack/templates/remote-vllm/doc_template.md
@@ -99,7 +99,7 @@ docker run \
   --env INFERENCE_MODEL=$INFERENCE_MODEL \
   --env VLLM_URL=http://host.docker.internal:$INFERENCE_PORT/v1 \
   --env SAFETY_MODEL=$SAFETY_MODEL \
-  --env VLLM_SAFETY_URL=http://host.docker.internal:$SAFETY_PORT/v1
+  --env SAFETY_VLLM_URL=http://host.docker.internal:$SAFETY_PORT/v1
 ```
 
 
@@ -118,7 +118,7 @@ llama stack build --template remote-vllm --image-type conda
 llama stack run ./run.yaml \
   --port $LLAMA_STACK_PORT \
   --env INFERENCE_MODEL=$INFERENCE_MODEL \
-  --env VLLM_URL=http://127.0.0.1:$INFERENCE_PORT/v1
+  --env VLLM_URL=http://localhost:$INFERENCE_PORT/v1
 ```
 
 If you are using Llama Stack Safety / Shield APIs, use:
@@ -130,7 +130,7 @@ export SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
 llama stack run ./run-with-safety.yaml \
   --port $LLAMA_STACK_PORT \
   --env INFERENCE_MODEL=$INFERENCE_MODEL \
-  --env VLLM_URL=http://127.0.0.1:$INFERENCE_PORT/v1 \
+  --env VLLM_URL=http://localhost:$INFERENCE_PORT/v1 \
   --env SAFETY_MODEL=$SAFETY_MODEL \
-  --env VLLM_SAFETY_URL=http://127.0.0.1:$SAFETY_PORT/v1
+  --env SAFETY_VLLM_URL=http://localhost:$SAFETY_PORT/v1
 ```