Fix issue when generating distros (#755)

This addresses the review comment at
https://github.com/meta-llama/llama-stack/pull/723#issuecomment-2581902075.

cc @yanxi0830 

I am not 100% sure the diff is correct, but this is the result of
running `python llama_stack/scripts/distro_codegen.py`.

---------

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
commit 300e6e2702 (parent 52a21ce78f)
Yuan Tang, 2025-01-15 08:34:08 -05:00, committed by GitHub
3 changed files with 133 additions and 143 deletions


@@ -134,7 +134,7 @@ def get_distribution_template() -> DistributionTemplate:
             "Inference model loaded into the vLLM server",
         ),
         "VLLM_URL": (
-            "http://host.docker.internal:5100}/v1",
+            "http://host.docker.internal:5100/v1",
             "URL of the vLLM server with the main inference model",
         ),
         "MAX_TOKENS": (