# Llama CLI Reference

The `llama` CLI tool helps you set up and use the Llama toolchain & agentic systems. It should be available on your path after installing the `llama-toolchain` package.

### Subcommands
1. `download`: Supports downloading models from Meta or HuggingFace.
2. `model`: Lists available models and their properties.
3. `stack`: Allows you to build and run a Llama Stack server. You can read more about this [here](/docs/cli_reference.md#step-3-building-configuring-and-running-llama-stack-servers).

### Sample Usage

```
llama --help
```
```
usage: llama [-h] {download,model,stack} ...

Welcome to the Llama CLI

options:
  -h, --help            show this help message and exit

subcommands:
  {download,model,stack}
```

## Step 1. Get the models

You first need to have models downloaded locally. To download any model you need the **Model Descriptor**. This can be obtained by running the command

```
llama model list
```

You should see a table like this:
```
+---------------------------------------+---------------------------------------------+----------------+----------------------------+
| Model Descriptor                      | HuggingFace Repo                            | Context Length | Hardware Requirements      |
+---------------------------------------+---------------------------------------------+----------------+----------------------------+
| Meta-Llama3.1-8B                      | meta-llama/Meta-Llama-3.1-8B                | 128K           | 1 GPU, each >= 20GB VRAM   |
+---------------------------------------+---------------------------------------------+----------------+----------------------------+
| Meta-Llama3.1-70B                     | meta-llama/Meta-Llama-3.1-70B               | 128K           | 8 GPUs, each >= 20GB VRAM  |
+---------------------------------------+---------------------------------------------+----------------+----------------------------+
| Meta-Llama3.1-405B:bf16-mp8           |                                             | 128K           | 8 GPUs, each >= 120GB VRAM |
+---------------------------------------+---------------------------------------------+----------------+----------------------------+
| Meta-Llama3.1-405B                    | meta-llama/Meta-Llama-3.1-405B-FP8          | 128K           | 8 GPUs, each >= 70GB VRAM  |
+---------------------------------------+---------------------------------------------+----------------+----------------------------+
| Meta-Llama3.1-405B:bf16-mp16          | meta-llama/Meta-Llama-3.1-405B              | 128K           | 16 GPUs, each >= 70GB VRAM |
+---------------------------------------+---------------------------------------------+----------------+----------------------------+
| Meta-Llama3.1-8B-Instruct             | meta-llama/Meta-Llama-3.1-8B-Instruct       | 128K           | 1 GPU, each >= 20GB VRAM   |
+---------------------------------------+---------------------------------------------+----------------+----------------------------+
| Meta-Llama3.1-70B-Instruct            | meta-llama/Meta-Llama-3.1-70B-Instruct      | 128K           | 8 GPUs, each >= 20GB VRAM  |
+---------------------------------------+---------------------------------------------+----------------+----------------------------+
| Meta-Llama3.1-405B-Instruct:bf16-mp8  |                                             | 128K           | 8 GPUs, each >= 120GB VRAM |
+---------------------------------------+---------------------------------------------+----------------+----------------------------+
| Meta-Llama3.1-405B-Instruct           | meta-llama/Meta-Llama-3.1-405B-Instruct-FP8 | 128K           | 8 GPUs, each >= 70GB VRAM  |
+---------------------------------------+---------------------------------------------+----------------+----------------------------+
| Meta-Llama3.1-405B-Instruct:bf16-mp16 | meta-llama/Meta-Llama-3.1-405B-Instruct     | 128K           | 16 GPUs, each >= 70GB VRAM |
+---------------------------------------+---------------------------------------------+----------------+----------------------------+
| Llama-Guard-3-8B                      | meta-llama/Llama-Guard-3-8B                 | 128K           | 1 GPU, each >= 20GB VRAM   |
+---------------------------------------+---------------------------------------------+----------------+----------------------------+
| Llama-Guard-3-8B:int8-mp1             | meta-llama/Llama-Guard-3-8B-INT8            | 128K           | 1 GPU, each >= 10GB VRAM   |
+---------------------------------------+---------------------------------------------+----------------+----------------------------+
| Prompt-Guard-86M                      | meta-llama/Prompt-Guard-86M                 | 128K           | 1 GPU, each >= 1GB VRAM    |
+---------------------------------------+---------------------------------------------+----------------+----------------------------+
```

To download models, you can use the `llama download` command. Here is an example download command to get the 8B/70B Instruct models.
You will need META_URL, which can be obtained from [here](https://llama.meta.com/docs/getting_the_models/meta/).

```
llama download --source meta --model-id Meta-Llama3.1-8B-Instruct --meta-url META_URL
```
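The same pattern works for the 70B Instruct model, and the Subcommands list above also names HuggingFace as a download source. The sketch below shows both; the `--hf-token` flag name is an assumption here, so check `llama download --help` for the exact options in your installed version.

```
# Same pattern for the 70B Instruct model (uses the META_URL obtained above)
llama download --source meta --model-id Meta-Llama3.1-70B-Instruct --meta-url META_URL

# Hypothetical HuggingFace variant: HF_TOKEN stands in for your Hugging Face access token
llama download --source huggingface --model-id Meta-Llama3.1-8B-Instruct --hf-token HF_TOKEN
```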
## Step 2: Understand the models

Running `llama model --help` shows the subcommands for working with models:

```
usage: llama model [-h] {download,list,template,describe} ...

Work with llama models

options:
  -h, --help            show this help message and exit

model_subcommands:
  {download,list,template,describe}
```

### 2.3 Describe

You can use the describe command to know more about a model:

```
llama model describe -m Meta-Llama3.1-8B-Instruct
```
```
+-----------------------------+---------------------------------------+
| Model                       | Meta-                                 |
|                             | Llama3.1-8B-Instruct                  |
+-----------------------------+---------------------------------------+
| HuggingFace ID              | meta-llama/Meta-Llama-3.1-8B-Instruct |
+-----------------------------+---------------------------------------+
| Description                 | Llama 3.1 8b instruct model           |
+-----------------------------+---------------------------------------+
| Context Length              | 128K tokens                           |
+-----------------------------+---------------------------------------+
| Weights format              | bf16                                  |
+-----------------------------+---------------------------------------+
| Model params.json           | {                                     |
|                             |     "dim": 4096,                      |
|                             |     "n_layers": 32,                   |
|                             |     "n_heads": 32,                    |
|                             |     "n_kv_heads": 8,                  |
|                             |     "vocab_size": 128256,             |
|                             |     "ffn_dim_multiplier": 1.3,        |
|                             |     "multiple_of": 1024,              |
|                             |     "norm_eps": 1e-05,                |
|                             |     "rope_theta": 500000.0,           |
|                             |     "use_scaled_rope": true           |
|                             | }                                     |
+-----------------------------+---------------------------------------+
| Recommended sampling params | {                                     |
|                             |     "strategy": "top_p",              |
|                             |     "temperature": 1.0,               |
|                             |     "top_p": 0.9,                     |
|                             |     "top_k": 0                        |
|                             | }                                     |
+-----------------------------+---------------------------------------+
```

### 2.4 Template

You can even run `llama model template` to see all of the templates and their tokens:

```
llama model template
```
```
+-----------+---------------------------------+
| Role      | Template Name                   |
+-----------+---------------------------------+
| user      | user-default                    |
| assistant | assistant-builtin-tool-call     |
| assistant | assistant-custom-tool-call      |
| assistant | assistant-default               |
| system    | system-builtin-and-custom-tools |
| system    | system-builtin-tools-only       |
| system    | system-custom-tools-only        |
| system    | system-default                  |
| tool      | tool-success                    |
| tool      | tool-failure                    |
+-----------+---------------------------------+
```

And fetch an example by passing it to `--name`:

```
llama model template --name tool-success
```
```
+----------+----------------------------------------------------------------+
| Name     | tool-success                                                   |
+----------+----------------------------------------------------------------+
| Template | <|start_header_id|>ipython<|end_header_id|>                    |
|          |                                                                |
|          | completed                                                      |
|          | [stdout]{"results":["something                                 |
|          | something"]}[/stdout]<|eot_id|>                                |
|          |                                                                |
+----------+----------------------------------------------------------------+
| Notes    | Note ipython header and [stdout]                               |
+----------+----------------------------------------------------------------+
```

Or:

```
llama model template --name system-builtin-tools-only
```
```
+----------+--------------------------------------------+
| Name     | system-builtin-tools-only                  |
+----------+--------------------------------------------+
| Template | <|start_header_id|>system<|end_header_id|> |
|          |                                            |
|          | Environment: ipython                       |
|          | Tools: brave_search, wolfram_alpha         |
|          |                                            |
|          | Cutting Knowledge Date: December 2023      |
|          | Today Date: 21 August 2024                 |
|          | <|eot_id|>                                 |
|          |                                            |
+----------+--------------------------------------------+
| Notes    |                                            |
+----------+--------------------------------------------+
```

These commands can help you understand the model interface and how prompts / messages are formatted for various scenarios.

**NOTE**: Outputs in the terminal are color printed to show special tokens.

## Step 3: Building and Configuring Llama Stack Distributions

- Please see our [Getting Started](getting_started.md) guide for details.

### Step 3.1. Build

In the following steps, imagine we'll be working with a `Meta-Llama3.1-8B-Instruct` model. We will name our build `8b-instruct` to help us remember the config. We will start by building our distribution (in the form of a Conda environment or Docker image). In this step, we will specify:
- `name`: the name for our distribution (e.g. `8b-instruct`)
- `image_type`: our build image type (`conda | docker`)
- `distribution_spec`: our distribution specs for specifying API providers
  - `description`: a short description of the configurations for the distribution
  - `providers`: specifies the underlying implementation for serving each API endpoint
  - `image_type`: `conda` | `docker` to specify whether to build the distribution in the form of a Docker image or Conda environment.

#### Build a local distribution with conda

The following command and specifications allow you to get started with building.

```
llama stack build
```
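To make the fields listed above concrete, here is a rough sketch of what a build configuration covering them could look like. The YAML layout and the provider identifier (`meta-reference`) are assumptions for illustration, not the exact schema; treat the output of `llama stack build` itself as the source of truth for your llama-toolchain version.

```
# Hypothetical build config sketch -- key names mirror the list above,
# but the exact schema and provider ids may differ across llama-toolchain versions.
name: 8b-instruct
image_type: conda                  # or: docker
distribution_spec:
  description: Conda distribution serving Meta-Llama3.1-8B-Instruct locally
  providers:
    inference: meta-reference      # assumed provider id; substitute what `llama stack build` offers
```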