Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-08-07 02:58:21 +00:00)
[docs]: Export variables (e.g. INFERENCE_MODEL) in getting_started
The variable is used in the Python client examples. Unless it is exported, the Python process running the examples won't see it and will fail.

Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
parent 3672e120ff
commit 0bec24c3db
1 changed file with 2 additions and 2 deletions
@@ -42,8 +42,8 @@ To get started quickly, we provide various Docker images for the server componen
 Lets setup some environment variables that we will use in the rest of the guide.
 
 ```bash
-INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
-LLAMA_STACK_PORT=8321
+export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
+export LLAMA_STACK_PORT=8321
 ```
 
 You can start the server using the following command:
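Why the `export` matters: a variable assigned without `export` is local to the shell and is not copied into the environment of child processes, so a Python client example launched from that shell cannot read it. Below is a minimal sketch of the failure mode the commit message describes (assuming a `python` interpreter is on the PATH):

```bash
# Without export: the variable is shell-local, so the child python
# process does not inherit it and the environment lookup returns None.
INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
python -c 'import os; print(os.environ.get("INFERENCE_MODEL"))'
# -> None

# With export: the variable is placed in the environment and is
# inherited by every child process, including the client examples.
export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
python -c 'import os; print(os.environ.get("INFERENCE_MODEL"))'
# -> meta-llama/Llama-3.2-3B-Instruct
```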