# Llama CLI Reference

The `llama` CLI tool helps you set up and use the Llama Stack and its agentic systems. It should be available on your path after installing the `llama-stack` package.

### Subcommands
1. `download`: Downloads models from Meta or Hugging Face.
2. `model`: Lists available models and their properties.
3. `stack`: Allows you to build and run a Llama Stack server. You can read more about this [here](cli_reference.md#step-3-building-and-configuring-llama-stack-distributions).

### Sample Usage

```
llama --help
```
```
usage: llama [-h] {download,model,stack} ...

Welcome to the Llama CLI

options:
  -h, --help            show this help message and exit

subcommands:
  {download,model,stack}
```
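Each subcommand also accepts `-h`/`--help`, so you can inspect its options the same way. For example:
```
llama download --help
```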
## Step 1: Get the models
You first need to have models downloaded locally.

To download any model, you need its **Model Descriptor**.
This can be obtained by running the following command:
```
llama model list
```
You should see a table like this:
```
+----------------------------------+------------------------------------------+----------------+
| Model Descriptor                 | Hugging Face Repo                        | Context Length |
+----------------------------------+------------------------------------------+----------------+
| Llama3.1-8B                      | meta-llama/Llama-3.1-8B                  | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama3.1-70B                     | meta-llama/Llama-3.1-70B                 | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama3.1-405B:bf16-mp8           | meta-llama/Llama-3.1-405B                | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama3.1-405B                    | meta-llama/Llama-3.1-405B-FP8            | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama3.1-405B:bf16-mp16          | meta-llama/Llama-3.1-405B                | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama3.1-8B-Instruct             | meta-llama/Llama-3.1-8B-Instruct         | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama3.1-70B-Instruct            | meta-llama/Llama-3.1-70B-Instruct        | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama3.1-405B-Instruct:bf16-mp8  | meta-llama/Llama-3.1-405B-Instruct       | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama3.1-405B-Instruct           | meta-llama/Llama-3.1-405B-Instruct-FP8   | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama3.1-405B-Instruct:bf16-mp16 | meta-llama/Llama-3.1-405B-Instruct       | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama3.2-1B                      | meta-llama/Llama-3.2-1B                  | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama3.2-3B                      | meta-llama/Llama-3.2-3B                  | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama3.2-11B-Vision              | meta-llama/Llama-3.2-11B-Vision          | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama3.2-90B-Vision              | meta-llama/Llama-3.2-90B-Vision          | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama3.2-1B-Instruct             | meta-llama/Llama-3.2-1B-Instruct         | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama3.2-3B-Instruct             | meta-llama/Llama-3.2-3B-Instruct         | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama3.2-11B-Vision-Instruct     | meta-llama/Llama-3.2-11B-Vision-Instruct | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama3.2-90B-Vision-Instruct     | meta-llama/Llama-3.2-90B-Vision-Instruct | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama-Guard-3-11B-Vision         | meta-llama/Llama-Guard-3-11B-Vision      | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama-Guard-3-1B:int4-mp1        | meta-llama/Llama-Guard-3-1B-INT4         | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama-Guard-3-1B                 | meta-llama/Llama-Guard-3-1B              | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama-Guard-3-8B                 | meta-llama/Llama-Guard-3-8B              | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama-Guard-3-8B:int8-mp1        | meta-llama/Llama-Guard-3-8B-INT8         | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Prompt-Guard-86M                 | meta-llama/Prompt-Guard-86M              | 128K           |
+----------------------------------+------------------------------------------+----------------+
| Llama-Guard-2-8B                 | meta-llama/Llama-Guard-2-8B              | 4K             |
+----------------------------------+------------------------------------------+----------------+
```

To download models, you can use the `llama download` command.

#### Downloading from [Meta](https://llama.meta.com/llama-downloads/)

Here are example download commands for the 3B-Instruct and 11B-Vision-Instruct models. You will need a `META_URL`, which can be obtained from [here](https://llama.meta.com/docs/getting_the_models/meta/).

Download the required checkpoints using the following commands:
```bash
# download the 3B-Instruct model; this can be run on a single GPU
llama download --source meta --model-id Llama3.2-3B-Instruct --meta-url META_URL

# you can also get the larger 11B-Vision-Instruct model
llama download --source meta --model-id Llama3.2-11B-Vision-Instruct --meta-url META_URL

# llama-agents have safety enabled by default. For this, you will need
# the safety models -- Llama-Guard and Prompt-Guard
llama download --source meta --model-id Prompt-Guard-86M --meta-url META_URL
llama download --source meta --model-id Llama-Guard-3-1B --meta-url META_URL
```

#### Downloading from [Hugging Face](https://huggingface.co/meta-llama)

Essentially, the same commands work; just replace `--source meta` with `--source huggingface`.
```bash
llama download --source huggingface --model-id Llama3.1-8B-Instruct --hf-token <HF_TOKEN>
```
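If you would rather not paste the token inline each time, one option is to keep it in a shell variable first. A minimal sketch; `HF_TOKEN` is simply a variable name chosen for this example, not something the `llama` CLI requires:
```bash
# HF_TOKEN holds your Hugging Face access token (example setup only).
export HF_TOKEN="<your Hugging Face token>"

llama download --source huggingface --model-id Llama-Guard-3-1B --hf-token "$HF_TOKEN"
llama download --source huggingface --model-id Prompt-Guard-86M --hf-token "$HF_TOKEN"
```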
## Step 2: Understand the models

The `llama model` command helps you explore a model's interface:
```
llama model --help
```
```
usage: llama model [-h] {download,list,prompt-format,describe} ...

Work with llama models

options:
  -h, --help            show this help message and exit

model_subcommands:
  {download,list,prompt-format,describe}
```
### 2.3 Describe

You can use the `describe` command to learn more about a model:
```
llama model describe -m Llama3.2-3B-Instruct
```
This prints a summary like the following:
```
+-----------------------------+----------------------------------+
| Model | Llama3.2-3B-Instruct |
+-----------------------------+----------------------------------+
| Hugging Face ID | meta-llama/Llama-3.2-3B-Instruct |
+-----------------------------+----------------------------------+
| Description | Llama 3.2 3b instruct model |
+-----------------------------+----------------------------------+
| Context Length | 128K tokens |
+-----------------------------+----------------------------------+
| Weights format | bf16 |
+-----------------------------+----------------------------------+
| Model params.json | { |
| | "dim": 3072, |
| | "n_layers": 28, |
| | "n_heads": 24, |
| | "n_kv_heads": 8, |
| | "vocab_size": 128256, |
| | "ffn_dim_multiplier": 1.0, |
| | "multiple_of": 256, |
| | "norm_eps": 1e-05, |
| | "rope_theta": 500000.0, |
| | "use_scaled_rope": true |
| | } |
+-----------------------------+----------------------------------+
| Recommended sampling params | { |
| | "strategy": "top_p", |
| | "temperature": 1.0, |
| | "top_p": 0.9, |
| | "top_k": 0 |
| | } |
+-----------------------------+----------------------------------+
```
### 2.4 Prompt Format
You can even run `llama model prompt-format` to see all of the templates and their tokens:
```
llama model prompt-format -m Llama3.2-3B-Instruct
```

You will be shown a Markdown-formatted description of the model interface and how prompts / messages are formatted for various scenarios.

**NOTE**: Terminal output is color-coded to highlight special tokens.
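For example, to inspect how multimodal messages are laid out for the vision model downloaded earlier (any descriptor from `llama model list` works here):
```
llama model prompt-format -m Llama3.2-11B-Vision-Instruct
```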
## Step 3: Building and Configuring Llama Stack Distributions
- Please see our [Getting Started](getting_started.md) guide for more details on how to build and start a Llama Stack distribution.
### Step 3.1 Build
In the following steps, imagine we'll be working with the `Llama3.1-8B-Instruct` model. We will name our build `8b-instruct` to help us remember the config. We will start by building our distribution (in the form of a Conda environment or a Docker image). In this step, we will specify:
- `name`: the name for our distribution (e.g. `8b-instruct`)
- `image_type`: our build image type (`conda` | `docker`), i.e. whether to build the distribution as a Docker image or a Conda environment
- `distribution_spec`: our distribution specs for specifying API providers
  - `description`: a short description of the configurations for the distribution
  - `providers`: specifies the underlying implementation for serving each API endpoint
At the end of the build command, a build configuration file capturing these choices will be generated.
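To kick off the build, you invoke the `stack build` subcommand. A minimal sketch, assuming the interactive flow (exact prompts and flags vary across `llama-stack` versions, so treat this as illustrative rather than definitive):
```bash
# Start a build; this walks you through the fields described above
# (name, image type, and a provider for each API endpoint).
llama stack build
```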