# Meta Reference Quantized Distribution
The `llamastack/distribution-meta-reference-quantized-gpu` distribution consists of the following provider configurations.
| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
| --- | --- | --- | --- | --- | --- |
| **Provider(s)** | meta-reference-quantized | meta-reference | meta-reference, remote::pgvector, remote::chroma | meta-reference | meta-reference |
The only difference vs. the `meta-reference-gpu` distribution is that it supports more efficient inference through quantization (e.g. fp8 and int4).
## Step 0. Prerequisite - Downloading Models
Please make sure you have Llama model checkpoints downloaded in `~/.llama` before proceeding. See the installation guide for instructions on downloading the models.
```bash
$ ls ~/.llama/checkpoints
Llama3.2-3B-Instruct:int4-qlora-eo8
```
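If the checkpoint is not there yet, the `llama` CLI can fetch it. The sketch below assumes the CLI's `llama download` command with the Meta source, which requires a signed download URL obtained from Meta; check `llama download --help` for the exact flags in your version:

```bash
# Download the int4-quantized Llama 3.2 3B Instruct checkpoint into ~/.llama.
# Replace <META_URL> with the signed URL you receive after requesting access.
llama download --source meta \
  --model-id Llama3.2-3B-Instruct:int4-qlora-eo8 \
  --meta-url "<META_URL>"
```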
## Step 1. Start the Distribution
### (Option 1) Start with Docker
```bash
$ cd distributions/meta-reference-quantized-gpu && docker compose up
```
> **Note:** This assumes the machine has a GPU available; the local server is started with access to it.
> **Note:** `~/.llama` should be the path containing the downloaded weights of the Llama models.
This will download and start running a pre-built Docker container. Alternatively, you may use the following command:
```bash
docker run -it \
  -p 5000:5000 \
  -v ~/.llama:/root/.llama \
  -v ./run.yaml:/root/my-run.yaml \
  --gpus=all \
  llamastack/distribution-meta-reference-quantized-gpu \
  --yaml_config /root/my-run.yaml
```
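Once the server is up, you can send it a quick request to confirm that quantized inference works end to end. This is a minimal sketch assuming the chat-completion route exposed by llama-stack distributions of this vintage (`/inference/chat_completion` on port 5000); the exact path and payload shape may differ with your version:

```bash
# Ask the quantized model a simple question; expects the server on localhost:5000.
curl http://localhost:5000/inference/chat_completion \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Llama3.2-3B-Instruct:int4-qlora-eo8",
    "messages": [{"role": "user", "content": "Hello, what can you do?"}],
    "stream": false
  }'
```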
### (Option 2) Start with Conda
1. Install the `llama` CLI. See the CLI Reference.

2. Build the `meta-reference-quantized-gpu` distribution:

   ```bash
   $ llama stack build --template meta-reference-quantized-gpu --image-type conda
   ```
3. Start running the distribution:

   ```bash
   $ cd distributions/meta-reference-quantized-gpu
   $ llama stack run ./run.yaml
   ```
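If port 5000 is already taken on your machine, the run command can usually be pointed at another port. This is a sketch assuming your `llama` CLI version supports a `--port` flag on `llama stack run` (check `llama stack run --help`):

```bash
# Serve the distribution on port 5001 instead of the default.
llama stack run ./run.yaml --port 5001
```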