---
orphan: true
---

# Meta Reference Quantized Distribution

```{toctree}
:maxdepth: 2
:hidden:

self
```

The `llamastack/distribution-meta-reference-quantized-gpu` distribution consists of the following provider configurations:

| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| datasetio | `remote::huggingface`, `inline::localfs` |
| inference | `inline::meta-reference-quantized` |
| safety | `inline::llama-guard` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::code-interpreter`, `inline::rag-runtime`, `remote::model-context-protocol` |
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |

The only difference vs. the `meta-reference-gpu` distribution is that it has support for more efficient inference -- with fp8, int4 quantization, etc.

Note that you need access to NVIDIA GPUs to run this distribution. This distribution is not compatible with CPU-only machines or machines with AMD GPUs.
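Before going further, it helps to confirm that a CUDA-capable GPU is actually visible on the host. A minimal check, assuming the NVIDIA drivers (and, for Docker, the NVIDIA Container Toolkit) are already installed:

```bash
# List the GPUs visible to the driver; if this fails or shows no devices,
# the distribution will not be able to run.
nvidia-smi
```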

### Environment Variables

The following environment variables can be configured:

- `LLAMA_STACK_PORT`: Port for the Llama Stack distribution server (default: `8321`)
- `INFERENCE_MODEL`: Inference model loaded into the Meta Reference server (default: `meta-llama/Llama-3.2-3B-Instruct`)
- `INFERENCE_CHECKPOINT_DIR`: Directory containing the Meta Reference model checkpoint (default: `null`)
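If you prefer not to repeat these values on every command, you can export them in your shell before running the examples below. This is just a sketch using the defaults listed above:

```bash
# Sketch: export the variables referenced by the commands in this guide.
# These are simply the documented defaults; adjust them to your setup.
export LLAMA_STACK_PORT=8321
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
# Only set this if your checkpoints live outside ~/.llama:
# export INFERENCE_CHECKPOINT_DIR=/path/to/checkpoints
```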

## Prerequisite: Downloading Models

Please use `llama model list --downloaded` to check that you have llama model checkpoints downloaded in `~/.llama` before proceeding. See the installation guide for instructions on downloading the models. Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.

```
$ llama model list --downloaded
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┓
┃ Model                                   ┃ Size     ┃ Modified Time       ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━┩
│ Llama3.2-1B-Instruct:int4-qlora-eo8     │ 1.53 GB  │ 2025-02-26 11:22:28 │
├─────────────────────────────────────────┼──────────┼─────────────────────┤
│ Llama3.2-1B                             │ 2.31 GB  │ 2025-02-18 21:48:52 │
├─────────────────────────────────────────┼──────────┼─────────────────────┤
│ Prompt-Guard-86M                        │ 0.02 GB  │ 2025-02-26 11:29:28 │
├─────────────────────────────────────────┼──────────┼─────────────────────┤
│ Llama3.2-3B-Instruct:int4-spinquant-eo8 │ 3.69 GB  │ 2025-02-26 11:37:41 │
├─────────────────────────────────────────┼──────────┼─────────────────────┤
│ Llama3.2-3B                             │ 5.99 GB  │ 2025-02-18 21:51:26 │
├─────────────────────────────────────────┼──────────┼─────────────────────┤
│ Llama3.1-8B                             │ 14.97 GB │ 2025-02-16 10:36:37 │
├─────────────────────────────────────────┼──────────┼─────────────────────┤
│ Llama3.2-1B-Instruct:int4-spinquant-eo8 │ 1.51 GB  │ 2025-02-26 11:35:02 │
├─────────────────────────────────────────┼──────────┼─────────────────────┤
│ Llama-Guard-3-1B                        │ 2.80 GB  │ 2025-02-26 11:20:46 │
├─────────────────────────────────────────┼──────────┼─────────────────────┤
│ Llama-Guard-3-1B:int4                   │ 0.43 GB  │ 2025-02-26 11:33:33 │
└─────────────────────────────────────────┴──────────┴─────────────────────┘
```
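If a checkpoint you need is not in this list, fetch it with `llama model download`. The example below is a sketch; the exact flags and model IDs may differ with your CLI version, and `--source meta` assumes you have a signed download URL from Meta (use `--source huggingface` with a Hugging Face token as an alternative):

```bash
# Sketch: download a quantized Llama 3.2 3B Instruct checkpoint into ~/.llama.
llama model download --source meta --model-id Llama3.2-3B-Instruct:int4-spinquant-eo8
```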

## Running the Distribution

You can do this via Conda (building the code locally) or via Docker, which has a pre-built image.

### Via Docker

This method allows you to get started quickly without having to build the distribution code.

```bash
LLAMA_STACK_PORT=8321
docker run \
  -it \
  --pull always \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-meta-reference-quantized-gpu \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
```

If you are using Llama Stack Safety / Shield APIs, use:

```bash
docker run \
  -it \
  --pull always \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-meta-reference-quantized-gpu \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
  --env SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
```
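Once the container is up, you can sanity-check it from another terminal. This is a minimal sketch that assumes the server exposes the standard Llama Stack `/v1/models` endpoint on the published port:

```bash
# Sketch: confirm the server is reachable and see which models it has registered.
curl -s http://localhost:$LLAMA_STACK_PORT/v1/models
```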

### Via Conda

Make sure you have done `uv pip install llama-stack` and have the Llama Stack CLI available.

```bash
llama stack build --template meta-reference-quantized-gpu --image-type conda
llama stack run distributions/meta-reference-quantized-gpu/run.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
```

If you are using Llama Stack Safety / Shield APIs, use:

```bash
llama stack run distributions/meta-reference-quantized-gpu/run-with-safety.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
  --env SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
```
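If your model checkpoints live outside `~/.llama`, you can point the server at them with the `INFERENCE_CHECKPOINT_DIR` variable documented above. A sketch, where `/path/to/my/checkpoints` is a placeholder for your own directory:

```bash
# Sketch: run the distribution against a checkpoint stored outside ~/.llama.
llama stack run distributions/meta-reference-quantized-gpu/run.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
  --env INFERENCE_CHECKPOINT_DIR=/path/to/my/checkpoints
```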