forked from phoenix-oss/llama-stack-mirror
Improve model download doc (#748)
## Context

The documentation on downloading models from the Meta source (https://llama-stack.readthedocs.io/en/latest/references/llama_cli_reference/index.html#downloading-from-meta) confused me and a colleague: we ran into [this issue](https://github.com/meta-llama/llama-stack/issues/746) while downloading. After some debugging, I found that META_URL must be quoted in the command. To spare other users the same confusion, I updated the doc to make this clear.

## Test

before

after
This commit is contained in:
parent ec8601ce88
commit 78727aad26

1 changed file with 5 additions and 5 deletions
@@ -97,20 +97,20 @@ To download models, you can use the llama download command.
 #### Downloading from [Meta](https://llama.meta.com/llama-downloads/)
 
-Here is an example download command to get the 3B-Instruct/11B-Vision-Instruct model. You will need META_URL which can be obtained from [here](https://llama.meta.com/docs/getting_the_models/meta/)
+Here is an example download command to get the 3B-Instruct/11B-Vision-Instruct model. You will need META_URL which can be obtained from [here](https://llama.meta.com/docs/getting_the_models/meta/). Note: You need to quote the META_URL
 
 Download the required checkpoints using the following commands:
 ```bash
 # download the 8B model, this can be run on a single GPU
-llama download --source meta --model-id Llama3.2-3B-Instruct --meta-url META_URL
+llama download --source meta --model-id Llama3.2-3B-Instruct --meta-url 'META_URL'
 
 # you can also get the 70B model, this will require 8 GPUs however
-llama download --source meta --model-id Llama3.2-11B-Vision-Instruct --meta-url META_URL
+llama download --source meta --model-id Llama3.2-11B-Vision-Instruct --meta-url 'META_URL'
 
 # llama-agents have safety enabled by default. For this, you will need
 # safety models -- Llama-Guard and Prompt-Guard
-llama download --source meta --model-id Prompt-Guard-86M --meta-url META_URL
-llama download --source meta --model-id Llama-Guard-3-1B --meta-url META_URL
+llama download --source meta --model-id Prompt-Guard-86M --meta-url 'META_URL'
+llama download --source meta --model-id Llama-Guard-3-1B --meta-url 'META_URL'
 ```
 
 #### Downloading from [Hugging Face](https://huggingface.co/meta-llama)
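For context on why the quoting matters: a signed META_URL contains `&`-separated query parameters, and an unquoted `&` is a shell control operator, so the shell silently truncates the argument at the first `&` and backgrounds the command. A minimal sketch of the failure mode, using a hypothetical placeholder URL (not a real signed link):

```shell
#!/bin/sh
# Hypothetical signed URL; real Meta download URLs likewise contain
# '&'-separated parameters such as Policy and Signature.
url_text='https://example.com/download?Policy=abc&Signature=xyz'

# Unquoted (simulated with eval, as if the URL were pasted bare on the
# command line): the shell splits at '&', backgrounds the command, and
# the argument is truncated to everything before the '&'.
unquoted=$(eval "printf '%s' $url_text" 2>/dev/null; wait)

# Quoted: the full URL reaches the command as a single intact argument.
quoted=$(printf '%s' "$url_text")

echo "unquoted argument: $unquoted"
echo "quoted argument:   $quoted"
```

Single quotes are the safest choice here, since they also prevent the shell from expanding any `$` that may appear in the signed URL.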