default to 8321 everywhere

Hardik Shah 2025-03-20 15:45:00 -07:00
parent 581e8ae562
commit 41f0421dbf
56 changed files with 2352 additions and 2305 deletions

@@ -80,7 +80,7 @@ Now you are ready to run Llama Stack with TGI as the inference provider. You can
 This method allows you to get started quickly without having to build the distribution code.
 ```bash
-LLAMA_STACK_PORT=5001
+LLAMA_STACK_PORT=8321
 docker run \
   -it \
   --pull always \

@@ -129,7 +129,7 @@ def get_distribution_template() -> DistributionTemplate:
         },
         run_config_env_vars={
             "LLAMA_STACK_PORT": (
-                "5001",
+                "8321",
                 "Port for the Llama Stack distribution server",
             ),
             "INFERENCE_MODEL": (
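The `run_config_env_vars` entries above pair each environment variable with a `(default, description)` tuple, so the server falls back to `8321` when `LLAMA_STACK_PORT` is unset. A minimal sketch of how such a default could be resolved at startup (the `resolve` helper is hypothetical and not taken from the Llama Stack codebase):

```python
import os

# Mirrors the (default, description) tuple shape used in the template above
run_config_env_vars = {
    "LLAMA_STACK_PORT": (
        "8321",
        "Port for the Llama Stack distribution server",
    ),
}

def resolve(name: str) -> str:
    """Return the environment value if set, else the template default."""
    default, _description = run_config_env_vars[name]
    return os.environ.get(name, default)

print(resolve("LLAMA_STACK_PORT"))  # "8321" unless overridden in the environment
```

Overriding the variable (e.g. `LLAMA_STACK_PORT=5001 docker run ...`) still wins over the new default; only the fallback changes in this commit.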