From 0bec24c3dbec454d329ca758adf0cdace6324e35 Mon Sep 17 00:00:00 2001
From: Ihar Hrachyshka
Date: Tue, 4 Feb 2025 18:25:45 -0500
Subject: [PATCH] [docs]: Export variables (e.g. INFERENCE_MODEL) in getting_started

The variable is used in the Python client examples. Unless it's exported,
the Python process running the examples won't see it and will fail.

Signed-off-by: Ihar Hrachyshka
---
 docs/source/getting_started/index.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/getting_started/index.md b/docs/source/getting_started/index.md
index a2a38e6b4..d62186a47 100644
--- a/docs/source/getting_started/index.md
+++ b/docs/source/getting_started/index.md
@@ -42,8 +42,8 @@ To get started quickly, we provide various Docker images for the server componen
 Lets setup some environment variables that we will use in the rest of the guide.
 
 ```bash
-INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
-LLAMA_STACK_PORT=8321
+export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
+export LLAMA_STACK_PORT=8321
 ```
 
 You can start the server using the following command:
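
A minimal sketch of the shell behavior this patch relies on (assuming a POSIX shell and a `python` interpreter on PATH; not part of the patch itself): a plain assignment is local to the shell, so a child Python process cannot read it from its environment, while an `export`ed assignment is inherited by child processes.

```bash
# Plain assignment: visible to the shell, but NOT placed in the
# environment passed to child processes.
INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
python -c 'import os; print(os.environ.get("INFERENCE_MODEL"))'
# prints: None

# Exported assignment: inherited by child processes, so the Python
# client examples can read it.
export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
python -c 'import os; print(os.environ.get("INFERENCE_MODEL"))'
# prints: meta-llama/Llama-3.2-3B-Instruct
```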