Improve model download doc (#748)

## context
The section of the documentation on downloading models from the Meta source,
https://llama-stack.readthedocs.io/en/latest/references/llama_cli_reference/index.html#downloading-from-meta,
confused me and another colleague because we hit this
[issue](https://github.com/meta-llama/llama-stack/issues/746) while
downloading.

After some debugging, I found that META_URL has to be quoted in the
command. To save other users the same confusion, I updated the doc to make
this explicit.
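
For anyone hitting the same error: the signed META_URL contains shell metacharacters such as `&` and `?`, so without quotes the shell splits the URL before `llama download` ever sees it. A minimal sketch of the difference, using a made-up placeholder URL rather than a real signed link:

```bash
# Unquoted (broken): the shell treats '&' as a command separator, so only the
# part of the URL before the first '&' reaches the CLI.
# llama download --source meta --model-id Llama3.2-3B-Instruct --meta-url https://example.com/download?token=abc&expires=123

# Quoted (works): the full URL is passed to the CLI as a single argument.
llama download --source meta --model-id Llama3.2-3B-Instruct \
  --meta-url 'https://example.com/download?token=abc&expires=123'
```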

## test 

before 
![Screenshot 2025-01-12 at 11 48 37 PM](https://github.com/user-attachments/assets/960a8793-4d32-44b0-a099-6214be7921b6)

after
![Screenshot 2025-01-12 at 11 40 02 PM](https://github.com/user-attachments/assets/8dfe5e36-bdba-47ef-a251-ec337d12e2be)
Botao Chen 2025-01-13 00:39:12 -08:00 committed by GitHub
parent ec8601ce88
commit 78727aad26

@@ -97,20 +97,20 @@ To download models, you can use the llama download command.
 #### Downloading from [Meta](https://llama.meta.com/llama-downloads/)
-Here is an example download command to get the 3B-Instruct/11B-Vision-Instruct model. You will need META_URL which can be obtained from [here](https://llama.meta.com/docs/getting_the_models/meta/)
+Here is an example download command to get the 3B-Instruct/11B-Vision-Instruct model. You will need META_URL which can be obtained from [here](https://llama.meta.com/docs/getting_the_models/meta/). Note: You need to quote the META_URL
 Download the required checkpoints using the following commands:
 ```bash
 # download the 8B model, this can be run on a single GPU
-llama download --source meta --model-id Llama3.2-3B-Instruct --meta-url META_URL
+llama download --source meta --model-id Llama3.2-3B-Instruct --meta-url 'META_URL'
 # you can also get the 70B model, this will require 8 GPUs however
-llama download --source meta --model-id Llama3.2-11B-Vision-Instruct --meta-url META_URL
+llama download --source meta --model-id Llama3.2-11B-Vision-Instruct --meta-url 'META_URL'
 # llama-agents have safety enabled by default. For this, you will need
 # safety models -- Llama-Guard and Prompt-Guard
-llama download --source meta --model-id Prompt-Guard-86M --meta-url META_URL
+llama download --source meta --model-id Prompt-Guard-86M --meta-url 'META_URL'
-llama download --source meta --model-id Llama-Guard-3-1B --meta-url META_URL
+llama download --source meta --model-id Llama-Guard-3-1B --meta-url 'META_URL'
 ```
 #### Downloading from [Hugging Face](https://huggingface.co/meta-llama)