Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-18 12:49:47 +00:00
Commit 78b6518b2c ("refine"), parent cd1fc4fd17. 5 changed files with 26 additions and 26 deletions.
````diff
@@ -63,7 +63,7 @@ docker run \
   -v ~/.llama:/root/.llama \
   llamastack/distribution-meta-reference-gpu \
   --port $LLAMA_STACK_PORT \
-  --env INFERENCE_MODEL=Llama3.2-3B-Instruct
+  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
 ```

 If you are using Llama Stack Safety / Shield APIs, use:
````
````diff
@@ -75,8 +75,8 @@ docker run \
   -v ~/.llama:/root/.llama \
   llamastack/distribution-meta-reference-gpu \
   --port $LLAMA_STACK_PORT \
-  --env INFERENCE_MODEL=Llama3.2-3B-Instruct \
-  --env SAFETY_MODEL=Llama-Guard-3-1B
+  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
+  --env SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
 ```

 ### Via Conda
````
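The substance of this commit is switching bare model names to namespace-qualified identifiers (`meta-llama/...`). A minimal shell sketch of that distinction, using only the two values visible in the hunks above (the check itself is illustrative, not part of the commit):

```shell
#!/bin/sh
# Model IDs as they appear after this commit: namespace-qualified.
INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
SAFETY_MODEL=meta-llama/Llama-Guard-3-1B

# Report whether each ID carries the meta-llama/ namespace prefix.
for m in "$INFERENCE_MODEL" "$SAFETY_MODEL"; do
  case "$m" in
    meta-llama/*) echo "$m: qualified" ;;
    *)            echo "$m: bare" ;;
  esac
done
```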
````diff
@@ -87,7 +87,7 @@ Make sure you have done `pip install llama-stack` and have the Llama Stack CLI a
 llama stack build --template meta-reference-gpu --image-type conda
 llama stack run distributions/meta-reference-gpu/run.yaml \
   --port 5001 \
-  --env INFERENCE_MODEL=Llama3.2-3B-Instruct
+  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
 ```

 If you are using Llama Stack Safety / Shield APIs, use:
````
````diff
@@ -95,6 +95,6 @@ If you are using Llama Stack Safety / Shield APIs, use:
 ```bash
 llama stack run distributions/meta-reference-gpu/run-with-safety.yaml \
   --port 5001 \
-  --env INFERENCE_MODEL=Llama3.2-3B-Instruct \
-  --env SAFETY_MODEL=Llama-Guard-3-1B
+  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
+  --env SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
 ```
````
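Put together, the conda-based safety run after this change would be invoked as sketched below. This is just the final hunk's result re-assembled for readability; the command string is only printed here, never executed against a real stack:

```shell
#!/bin/sh
# Re-assembled from the final hunk above; printed only, not executed.
run_cmd="llama stack run distributions/meta-reference-gpu/run-with-safety.yaml \
  --port 5001 \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
  --env SAFETY_MODEL=meta-llama/Llama-Guard-3-1B"
echo "$run_cmd"
```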