From 16b7fa46149f83b2fb8df0f00ae981b612f43e86 Mon Sep 17 00:00:00 2001
From: Xi Yan
Date: Tue, 5 Nov 2024 15:21:13 -0800
Subject: [PATCH] quantized model docs

---
 .../meta-reference-quantized-gpu.md           | 39 ++++++++++++++-----
 1 file changed, 30 insertions(+), 9 deletions(-)

diff --git a/docs/source/getting_started/distributions/self_hosted_distro/meta-reference-quantized-gpu.md b/docs/source/getting_started/distributions/self_hosted_distro/meta-reference-quantized-gpu.md
index 0c05a13c1..fb5ebf4e9 100644
--- a/docs/source/getting_started/distributions/self_hosted_distro/meta-reference-quantized-gpu.md
+++ b/docs/source/getting_started/distributions/self_hosted_distro/meta-reference-quantized-gpu.md
@@ -9,7 +9,20 @@ The `llamastack/distribution-meta-reference-quantized-gpu` distribution consists
 
 The only difference vs. the `meta-reference-gpu` distribution is that it has support for more efficient inference -- with fp8, int4 quantization, etc.
 
-### Start the Distribution (Single Node GPU)
+### Step 0. Prerequisite - Downloading Models
+Please make sure you have Llama model checkpoints downloaded in `~/.llama` before proceeding. See the [installation guide](https://llama-stack.readthedocs.io/en/latest/cli_reference/download_models.html) to download the models.
+
+```
+$ ls ~/.llama/checkpoints
+Llama3.1-8B           Llama3.2-11B-Vision-Instruct  Llama3.2-1B-Instruct  Llama3.2-90B-Vision-Instruct  Llama-Guard-3-8B
+Llama3.1-8B-Instruct  Llama3.2-1B                   Llama3.2-3B-Instruct  Llama-Guard-3-1B              Prompt-Guard-86M
+```
+
+### Step 1. Start the Distribution
+#### (Option 1) Start with Docker
+```
+$ cd distributions/meta-reference-quantized-gpu && docker compose up
+```
 
 > [!NOTE]
 > This assumes you have access to GPU to start a local server with access to your GPU.
@@ -19,16 +32,24 @@ The only difference vs. the `meta-reference-gpu` distribution is that it has sup
 
 > `~/.llama` should be the path containing downloaded weights of Llama models.
 
-To download and start running a pre-built docker container, you may use the following commands:
+This will download and start running a pre-built Docker container. Alternatively, you may use the following command:
 
 ```
-docker run -it -p 5000:5000 -v ~/.llama:/root/.llama \
-  -v ./run.yaml:/root/my-run.yaml \
-  --gpus=all \
-  distribution-meta-reference-quantized-gpu \
-  --yaml_config /root/my-run.yaml
+docker run -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./run.yaml:/root/my-run.yaml --gpus=all distribution-meta-reference-quantized-gpu --yaml_config /root/my-run.yaml
 ```
 
-### Alternative (Build and start distribution locally via conda)
+#### (Option 2) Start with Conda
 
-- You may checkout the [Getting Started](../../docs/getting_started.md) for more details on building locally via conda and starting up the distribution.
+1. Install the `llama` CLI. See the [CLI Reference](https://llama-stack.readthedocs.io/en/latest/cli_reference/index.html) for details.
+
+2. Build the `meta-reference-quantized-gpu` distribution:
+
+```
+$ llama stack build --template meta-reference-quantized-gpu --image-type conda
+```
+
+3. Start running the distribution:
+```
+$ cd distributions/meta-reference-quantized-gpu
+$ llama stack run ./run.yaml
+```
\ No newline at end of file
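
If the checkpoints listed in Step 0 are not yet present, the `llama` CLI referenced in the patch can fetch them. The following is a minimal sketch, assuming the `llama download` subcommand and flags described in the linked download guide; the exact flags may differ across versions, and `<META_URL>` is a placeholder for the signed download URL obtained from Meta:

```
# Hedged sketch: fetch one of the checkpoints shown in Step 0 with the llama CLI.
# The subcommand and flags follow the linked download guide and may vary by
# llama-stack version; <META_URL> is an assumed placeholder for the signed URL
# from Meta's download page.
$ llama download --source meta --model-id Llama3.2-3B-Instruct --meta-url <META_URL>
```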
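
Once the distribution is running (via either option in Step 1), a quick request can confirm the server is answering. This is a minimal sketch, assuming the default port 5000 mapped in the `docker run` command above and the `/inference/chat_completion` route from the Llama Stack getting-started examples; the `model` value must match one of the checkpoints listed in Step 0:

```
# Hedged smoke test: POST a chat completion to the locally started server.
# Port 5000 matches the docker run mapping above; the route and payload shape
# follow the Llama Stack getting-started examples and may differ across versions.
$ curl http://localhost:5000/inference/chat_completion \
    -H 'Content-Type: application/json' \
    -d '{
      "model": "Llama3.2-3B-Instruct",
      "messages": [{"role": "user", "content": "Write me a two-sentence poem."}],
      "stream": false
    }'
```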