llama-stack/llama_stack/templates/meta-reference-quantized-gpu/doc_template.md

---
orphan: true
---
# Meta Reference Quantized Distribution

```{toctree}
:maxdepth: 2
:hidden:

self
```

The `llamastack/distribution-{{ name }}` distribution consists of the following provider configurations:

{{ providers_table }}

The only difference from the `meta-reference-gpu` distribution is that this one supports more efficient inference through fp8 and int4 quantization.

Note that you need access to NVIDIA GPUs to run this distribution. It is not compatible with CPU-only machines or machines with AMD GPUs.

{% if run_config_env_vars %}
## Environment Variables

The following environment variables can be configured:

{% for var, (default_value, description) in run_config_env_vars.items() %}
- `{{ var }}`: {{ description }} (default: `{{ default_value }}`)
{% endfor %}
{% endif %}
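Any of the configurable variables can be overridden at launch with a repeated `--env KEY=VALUE` flag, the same mechanism used in the run commands later in this guide. A minimal sketch (the variable and values shown are the examples used below, not required settings):

```bash
# Override a run-config environment variable at launch time.
# INFERENCE_MODEL is the example variable used throughout this guide;
# substitute any variable listed above.
llama stack run distributions/{{ name }}/run.yaml \
  --port 5001 \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
```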

## Prerequisite: Downloading Models

Please make sure you have Llama model checkpoints downloaded in `~/.llama` before proceeding; see the installation guide for how to download the models. Run `llama model list` to see the models available for download, and `llama model download` to download the checkpoints.

```
$ ls ~/.llama/checkpoints
Llama3.1-8B           Llama3.2-11B-Vision-Instruct  Llama3.2-1B-Instruct  Llama3.2-90B-Vision-Instruct  Llama-Guard-3-8B
Llama3.1-8B-Instruct  Llama3.2-1B                   Llama3.2-3B-Instruct  Llama-Guard-3-1B              Prompt-Guard-86M
```
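For example, to fetch one of the checkpoints listed above (a sketch; the `--source` and `--model-id` flags are the usual way to invoke the `llama` CLI, but check `llama model download --help` for your installed version):

```bash
# List the models available for download
llama model list

# Download a checkpoint into ~/.llama (flags assumed from the llama CLI;
# downloading with --source meta may prompt for a signed download URL)
llama model download --source meta --model-id Llama3.2-3B-Instruct
```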

## Running the Distribution

You can run the distribution either via Conda (which builds the distribution code) or via Docker (which uses a pre-built image).

### Via Docker

This method allows you to get started quickly without having to build the distribution code.

```bash
LLAMA_STACK_PORT=5001
docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-{{ name }} \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
```

If you are using Llama Stack Safety / Shield APIs, use:

```bash
docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-{{ name }} \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
  --env SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
```
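Since this distribution requires NVIDIA GPUs, you may also need to grant the container GPU access explicitly, depending on your Docker setup. A sketch under that assumption (`--gpus all` is a standard Docker flag and requires the NVIDIA Container Toolkit on the host):

```bash
# Same invocation as above, with explicit GPU access for the container.
# Whether --gpus all is needed depends on your Docker configuration.
docker run \
  -it \
  --gpus all \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-{{ name }} \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
```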

### Via Conda

Make sure you have done `pip install llama-stack` and have the Llama Stack CLI available.
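Note that the commands below reuse `$LLAMA_STACK_PORT`, which was only set in the Docker section; if you are starting from a fresh shell, export it first (same example value as above):

```bash
# Port the Llama Stack server will listen on (example value from the Docker section)
export LLAMA_STACK_PORT=5001
```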

```bash
llama stack build --template {{ name }} --image-type conda
llama stack run distributions/{{ name }}/run.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
```

If you are using Llama Stack Safety / Shield APIs, use:

```bash
llama stack run distributions/{{ name }}/run-with-safety.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
  --env SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
```
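Once the server is up (via Docker or Conda), you can sanity-check it from another shell. A hedged sketch, assuming the `llama-stack-client` CLI is installed (`pip install llama-stack-client`) and that its `configure` and `models list` subcommands match your installed version:

```bash
# Point the client CLI at the running server, then list the registered models.
# (Subcommand names assumed from the llama-stack-client CLI; verify with --help.)
llama-stack-client configure --endpoint http://localhost:$LLAMA_STACK_PORT
llama-stack-client models list
```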