From 78727aad26415f88987d5902a51fb6bee5623ab9 Mon Sep 17 00:00:00 2001
From: Botao Chen
Date: Mon, 13 Jan 2025 00:39:12 -0800
Subject: [PATCH] Improve model download doc (#748)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

## context

The documentation on downloading models from the Meta source (https://llama-stack.readthedocs.io/en/latest/references/llama_cli_reference/index.html#downloading-from-meta) confused me and another colleague because we hit [this issue](https://github.com/meta-llama/llama-stack/issues/746) while downloading. After some debugging, I found that META_URL must be quoted in the command. To spare other users the same confusion, I updated the doc to make this clear.

## test

before

![Screenshot 2025-01-12 at 11 48 37 PM](https://github.com/user-attachments/assets/960a8793-4d32-44b0-a099-6214be7921b6)

after

![Screenshot 2025-01-12 at 11 40 02 PM](https://github.com/user-attachments/assets/8dfe5e36-bdba-47ef-a251-ec337d12e2be)
---
 .../references/llama_cli_reference/download_models.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/source/references/llama_cli_reference/download_models.md b/docs/source/references/llama_cli_reference/download_models.md
index 3007aa88d..3c40f1392 100644
--- a/docs/source/references/llama_cli_reference/download_models.md
+++ b/docs/source/references/llama_cli_reference/download_models.md
@@ -97,20 +97,20 @@ To download models, you can use the llama download command.
 
 #### Downloading from [Meta](https://llama.meta.com/llama-downloads/)
 
-Here is an example download command to get the 3B-Instruct/11B-Vision-Instruct model. You will need META_URL which can be obtained from [here](https://llama.meta.com/docs/getting_the_models/meta/)
+Here is an example download command to get the 3B-Instruct/11B-Vision-Instruct model. You will need META_URL which can be obtained from [here](https://llama.meta.com/docs/getting_the_models/meta/). Note: You need to quote the META_URL
 
 Download the required checkpoints using the following commands:
 ```bash
 # download the 8B model, this can be run on a single GPU
-llama download --source meta --model-id Llama3.2-3B-Instruct --meta-url META_URL
+llama download --source meta --model-id Llama3.2-3B-Instruct --meta-url 'META_URL'
 
 # you can also get the 70B model, this will require 8 GPUs however
-llama download --source meta --model-id Llama3.2-11B-Vision-Instruct --meta-url META_URL
+llama download --source meta --model-id Llama3.2-11B-Vision-Instruct --meta-url 'META_URL'
 
 # llama-agents have safety enabled by default. For this, you will need
 # safety models -- Llama-Guard and Prompt-Guard
-llama download --source meta --model-id Prompt-Guard-86M --meta-url META_URL
-llama download --source meta --model-id Llama-Guard-3-1B --meta-url META_URL
+llama download --source meta --model-id Prompt-Guard-86M --meta-url 'META_URL'
+llama download --source meta --model-id Llama-Guard-3-1B --meta-url 'META_URL'
 ```
 
 #### Downloading from [Hugging Face](https://huggingface.co/meta-llama)
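An aside, not part of the patch itself: a minimal shell sketch of why the quoting matters. Presigned Meta download links carry `&`-separated query parameters; typed unquoted on a command line, each `&` is parsed by the shell as the background-job operator, so `--meta-url` receives only the URL up to the first `&`. The URL and helper function below are made up for illustration.

```shell
#!/bin/sh
# Illustrative only: presigned URLs look roughly like
#   https://example.net/model?Policy=abc&Signature=def
# Unquoted, the shell cuts the command line at each '&', so the option
# value is truncated to "https://example.net/model?Policy=abc".
# Single quotes keep the URL intact as one argument.

# Stand-in for any command that takes the URL as its first argument.
show_first_arg() { printf '%s\n' "$1"; }

# Quoted: the whole URL arrives as a single argument.
show_first_arg 'https://example.net/model?Policy=abc&Signature=def'
# prints: https://example.net/model?Policy=abc&Signature=def
```

The same reasoning applies to `llama download --meta-url ...`: wrapping the pasted URL in single quotes prevents the shell from interpreting `&` (and any other metacharacters) inside it.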