[docs]: Export variables (e.g. INFERENCE_MODEL) in getting_started

The variables are used in the Python client examples. Unless they are
exported, the Python process running the examples won't see them and
will fail.

Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
Author: Ihar Hrachyshka
Date:   2025-02-04 18:25:45 -05:00
parent 3672e120ff
commit 0bec24c3db

@@ -42,8 +42,8 @@ To get started quickly, we provide various Docker images for the server componen
 Lets setup some environment variables that we will use in the rest of the guide.
 ```bash
-INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
-LLAMA_STACK_PORT=8321
+export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
+export LLAMA_STACK_PORT=8321
 ```
 You can start the server using the following command:
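
For context on why `export` matters here: a plain `VAR=value` assignment is local to the shell, while `export VAR=value` places the variable in the environment inherited by child processes, such as a Python script started from that shell. A minimal sketch of the failure mode (the check below is hypothetical; the actual getting_started examples consume the variable through the client setup code):

```python
import os

# Read the model name from the process environment. If the shell did
# `INFERENCE_MODEL=...` without `export`, the variable is not inherited
# by this Python process and os.environ.get() returns None.
model = os.environ.get("INFERENCE_MODEL")
if model is None:
    raise SystemExit(
        "INFERENCE_MODEL is not set; run "
        "`export INFERENCE_MODEL=...` in the shell first."
    )
print(f"Using model: {model}")
```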