Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-06-28 02:53:30 +00:00)
docs: fix typo (#1390)
# What does this PR do?

[Provide a short summary of what this PR does and why. Link to relevant issues if applicable.]

[//]: # (If resolving an issue, uncomment and update the line below)
[//]: # (Closes #[issue-number])

## Test Plan

[Describe the tests you ran to verify your changes with result summaries. *Provide clear instructions so the plan can be easily re-executed.*]

[//]: # (## Documentation)

---------

Signed-off-by: reidliu <reid201711@gmail.com>
Co-authored-by: reidliu <reid201711@gmail.com>
This commit is contained in:
parent d57cffb495
commit cb085d56c6
2 changed files with 2 additions and 2 deletions
@@ -35,7 +35,7 @@ The following environment variables can be configured:
 - `LLAMA_STACK_PORT`: Port for the Llama Stack distribution server (default: `5001`)
 - `INFERENCE_MODEL`: Inference model loaded into the TGI server (default: `meta-llama/Llama-3.2-3B-Instruct`)
-- `TGI_URL`: URL of the TGI server with the main inference model (default: `http://127.0.0.1:8080}/v1`)
+- `TGI_URL`: URL of the TGI server with the main inference model (default: `http://127.0.0.1:8080/v1`)
 - `TGI_SAFETY_URL`: URL of the TGI server with the safety model (default: `http://127.0.0.1:8081/v1`)
 - `SAFETY_MODEL`: Name of the safety (Llama-Guard) model to use (default: `meta-llama/Llama-Guard-3-1B`)
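The stray `}` in the `TGI_URL` default mattered because these documented defaults are consumed verbatim when the variable is unset. A minimal sketch of how a launcher might resolve them (illustrative only; the `os.getenv` fallbacks mirror the documented defaults, not the project's actual startup code):

```python
import os

# Resolve settings the way the docs describe: use the environment
# variable when set, otherwise fall back to the documented default.
# With the pre-fix default "http://127.0.0.1:8080}/v1", this would
# have produced an invalid endpoint URL.
tgi_url = os.getenv("TGI_URL", "http://127.0.0.1:8080/v1")
inference_model = os.getenv("INFERENCE_MODEL", "meta-llama/Llama-3.2-3B-Instruct")
port = int(os.getenv("LLAMA_STACK_PORT", "5001"))

print(f"Serving {inference_model} on port {port}, inference via {tgi_url}")
```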
@@ -137,7 +137,7 @@ def get_distribution_template() -> DistributionTemplate:
             "Inference model loaded into the TGI server",
         ),
         "TGI_URL": (
-            "http://127.0.0.1:8080}/v1",
+            "http://127.0.0.1:8080/v1",
             "URL of the TGI server with the main inference model",
         ),
         "TGI_SAFETY_URL": (
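As the second hunk shows, the template pairs each environment variable with a `(default, description)` tuple. A self-contained sketch of that pattern, with values taken from the diff (the real `get_distribution_template()` returns a `DistributionTemplate`, which this does not reproduce):

```python
# Sketch of the (default, description) mapping pattern visible in the
# diff; only the entries shown in the hunk are included here.
default_env = {
    "INFERENCE_MODEL": (
        "meta-llama/Llama-3.2-3B-Instruct",
        "Inference model loaded into the TGI server",
    ),
    "TGI_URL": (
        "http://127.0.0.1:8080/v1",  # fixed: stray "}" removed by this commit
        "URL of the TGI server with the main inference model",
    ),
    "TGI_SAFETY_URL": (
        "http://127.0.0.1:8081/v1",
        "URL of the TGI server with the safety model",
    ),
}

for name, (default, description) in default_env.items():
    print(f"{name}={default}  # {description}")
```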