From d9f5beb15a2e05c27427c32f52a315947c54c4c9 Mon Sep 17 00:00:00 2001
From: Reid <61492567+reidliu41@users.noreply.github.com>
Date: Wed, 19 Feb 2025 02:24:31 +0800
Subject: [PATCH] style: update download help text (#1135)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

# What does this PR do?

Based on the code at https://github.com/meta-llama/llama-stack/blob/6b1773d530e9f168f86beb83a6ec6af73555efe4/llama_stack/cli/download.py#L454 and the tests, `--model-id` accepts a comma-separated list of model IDs, so this PR updates the help text to mention that usage.

```
$ llama model download --source meta --model-id Llama3.2-1B,Llama3.2-3B
Please provide the signed URL for model Llama3.2-1B you received via email
after visiting https://www.llama.com/llama-downloads/
(e.g., https://llama3-1.llamameta.net/*?Policy...):
Downloading checklist.chk       ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 156/156 bytes - 0:00:00
Downloading tokenizer.model     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 2.2/2.2 MB - 0:00:00
Downloading params.json         ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 220/220 bytes - 0:00:00
Downloading consolidated.00.pth ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 2.5/2.5 GB - 0:00:00

Successfully downloaded model to /Users/xx/.llama/checkpoints/Llama3.2-1B

[Optionally] To run MD5 checksums, use the following command:
llama model verify-download --model-id Llama3.2-1B

Please provide the signed URL for model Llama3.2-3B you received via email
after visiting https://www.llama.com/llama-downloads/
(e.g., https://llama3-1.llamameta.net/*?Policy...):
Downloading checklist.chk       ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 156/156 bytes - 0:00:00
Downloading tokenizer.model     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 2.2/2.2 MB - 0:00:00
Downloading params.json         ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 220/220 bytes - 0:00:00
Downloading consolidated.00.pth ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 6.4/6.4 GB - 0:00:00

Successfully downloaded model to /Users/xx/.llama/checkpoints/Llama3.2-3B

$ llama model download --source huggingface --model-id Llama3.2-1B,Llama3.2-3B
original%2Fparams.json: 100%|██████████████████████████████████████████████████████████| 220/220 [00:00<00:00, 564kB/
Successfully downloaded model to /Users/xx/.llama/checkpoints/Llama3.2-1B
...
tokenizer.json: 100%|█████████████████████████████████████████████████████████████| 9.09M/9.09M [00:00<00:00, 9.18MB/s]
Successfully downloaded model to /Users/xxx/.llama/checkpoints/Llama3.2-3B

before:
$ llama model download --help
  --model-id MODEL_ID   See `llama model list` or `llama model list --show-all`
                        for the list of available models

after:
$ llama model download --help
  --model-id MODEL_ID   See `llama model list` or `llama model list --show-all`
                        for the list of available models. Specify multiple model
                        IDs with commas, e.g. --model-id Llama3.2-1B,Llama3.2-3B
```

## Test Plan

The `llama model download` runs above exercise the comma-separated `--model-id` form for both the `meta` and `huggingface` sources.

Signed-off-by: reidliu
Co-authored-by: reidliu
---
 llama_stack/cli/download.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/llama_stack/cli/download.py b/llama_stack/cli/download.py
index 8afc6d31d..af86f7243 100644
--- a/llama_stack/cli/download.py
+++ b/llama_stack/cli/download.py
@@ -56,7 +56,7 @@ def setup_download_parser(parser: argparse.ArgumentParser) -> None:
     parser.add_argument(
         "--model-id",
         required=False,
-        help="See `llama model list` or `llama model list --show-all` for the list of available models",
+        help="See `llama model list` or `llama model list --show-all` for the list of available models. Specify multiple model IDs with commas, e.g. --model-id Llama3.2-1B,Llama3.2-3B",
     )
     parser.add_argument(
         "--hf-token",
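
For context, the comma-separated `--model-id` handling that the patch documents can be sketched as follows. This is a minimal illustration, not the actual code in `llama_stack/cli/download.py`; `parse_model_ids` is a hypothetical name.

```python
# Minimal sketch of splitting a comma-separated --model-id value into
# individual model IDs (illustrative; not the real helper in download.py).
def parse_model_ids(model_id_arg: str) -> list[str]:
    """Split a --model-id value on commas, dropping empty entries."""
    return [part.strip() for part in model_id_arg.split(",") if part.strip()]

# Each ID is then downloaded in turn, as in the transcripts above.
for model_id in parse_model_ids("Llama3.2-1B,Llama3.2-3B"):
    print(model_id)
```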