commit 16b7fa4614 (parent 4dd01eeaa1): quantized model docs
1 changed file with 30 additions and 9 deletions

The only difference vs. the `meta-reference-gpu` distribution is that it has support for more efficient inference -- with fp8, int4 quantization, etc.

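Which quantization mode the server uses is controlled by the distribution's `run.yaml`. A quick, hypothetical way to inspect it (the `quantization` key name is an assumption about this distribution's config, not something this commit confirms):

```
$ grep -A 2 "quantization" distributions/meta-reference-quantized-gpu/run.yaml
```
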
### Step 0. Prerequisite - Downloading Models

Please make sure you have llama model checkpoints downloaded in `~/.llama` before proceeding. See the [installation guide](https://llama-stack.readthedocs.io/en/latest/cli_reference/download_models.html) for how to download the models.

```
$ ls ~/.llama/checkpoints
Llama3.1-8B           Llama3.2-11B-Vision-Instruct  Llama3.2-1B-Instruct  Llama3.2-90B-Vision-Instruct  Llama-Guard-3-8B
Llama3.1-8B-Instruct  Llama3.2-1B                   Llama3.2-3B-Instruct  Llama-Guard-3-1B              Prompt-Guard-86M
```

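If any of these are missing, they can be fetched with the `llama` CLI from the guide linked above. A minimal sketch, assuming the `llama model download` command and flags from that guide; the model id is illustrative, and `<SIGNED_URL_FROM_EMAIL>` stands for the signed URL Meta emails you:

```
$ llama model download --source meta --model-id Llama3.2-3B-Instruct --meta-url "<SIGNED_URL_FROM_EMAIL>"
```
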
### Step 1. Start the Distribution

#### (Option 1) Start with Docker

```
$ cd distributions/meta-reference-quantized-gpu && docker compose up
```

> [!NOTE]
> This assumes you have access to a GPU on the machine where you start the local server.
> `~/.llama` should be the path containing downloaded weights of Llama models.

This will download and start running a pre-built docker container. Alternatively, you may use the following commands:

```
docker run -it -p 5000:5000 -v ~/.llama:/root/.llama \
  -v ./run.yaml:/root/my-run.yaml \
  --gpus=all \
  distribution-meta-reference-quantized-gpu \
  --yaml_config /root/my-run.yaml
```

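Once the container is up, a quick smoke test from another terminal can confirm the server is responding. A sketch, assuming the default port 5000 from the mapping above and the `models/list` route that llama-stack servers exposed around this time (the path may differ in your version):

```
$ curl http://localhost:5000/models/list
```
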
#### (Option 2) Start with Conda

1. Install the `llama` CLI. See the [CLI Reference](https://llama-stack.readthedocs.io/en/latest/cli_reference/index.html).

2. Build the `meta-reference-quantized-gpu` distribution

```
$ llama stack build --template meta-reference-quantized-gpu --image-type conda
```

3. Start running the distribution

```
$ cd distributions/meta-reference-quantized-gpu
$ llama stack run ./run.yaml
```
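
With the stack running (via Docker or Conda), you can exercise inference end to end. The call below is a sketch rather than a guaranteed-stable API: it assumes the server listens on port 5000, that it exposes the `inference/chat_completion` route used by llama-stack at the time of this commit, and that the model named here is one you downloaded and configured in `run.yaml`:

```
$ curl -X POST http://localhost:5000/inference/chat_completion \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Llama3.2-3B-Instruct",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```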