Meta Reference Quantized Distribution

The llamastack/distribution-meta-reference-quantized-gpu distribution consists of the following provider configurations.

API        Provider(s)
Inference  meta-reference-quantized
Agents     meta-reference
Memory     meta-reference, remote::pgvector, remote::chroma
Safety     meta-reference
Telemetry  meta-reference

The only difference from the meta-reference-gpu distribution is that this one supports more efficient inference, using fp8 and int4 quantization.
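For illustration, the quantized inference provider is selected in run.yaml. The exact schema depends on your llama-stack version, and the provider id, model identifier, and field names below are assumptions; treat this as a sketch rather than a verbatim excerpt of the shipped run.yaml:

providers:
  inference:
    - provider_id: meta0                        # illustrative id, not prescribed by the distribution
      provider_type: meta-reference-quantized   # the inference provider listed in the table above
      config:
        model: Llama3.2-3B-Instruct:int4-qlora-eo8   # assumed example of an int4-quantized checkpoint
        quantization:
          type: int4                            # fp8 is the other option mentioned above
        max_seq_len: 2048
        max_batch_size: 1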

Start the Distribution (Single Node GPU)

Note

This assumes you have access to a GPU on the machine where you start the local server.

Note

~/.llama should be the path containing downloaded weights of Llama models.
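As a quick sanity check before starting the container, you can confirm that weights are present. This is illustrative only; the checkpoints subdirectory layout is an assumption based on how the llama CLI typically stores downloads:

# List downloaded model checkpoints; you should see one directory per downloaded model.
ls ~/.llama/checkpoints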

To download and start a pre-built Docker container, you may use the following command:

# Mount the downloaded model weights and the run config into the container, expose the server on port 5000, and pass through the GPUs.
docker run -it -p 5000:5000 -v ~/.llama:/root/.llama \
  -v ./run.yaml:/root/my-run.yaml \
  --gpus=all \
  llamastack/distribution-meta-reference-quantized-gpu \
  --yaml_config /root/my-run.yaml
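This distribution directory also ships a compose.yaml. If you prefer Docker Compose, the rough equivalent (assuming the compose file wires up the same image, port, GPU access, and volume mounts) is:

# Bring the distribution up using the bundled compose file in this directory.
docker compose up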

Alternative (Build and start the distribution locally via conda)

  • You may check out the Getting Started guide for more details on building locally via conda and starting up the distribution; a rough sketch of the typical commands is shown below.
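For reference, a typical conda-based flow looks roughly like the following. Flag names and the template name may differ across llama-stack versions, so treat this as a sketch and defer to the Getting Started guide:

# Build the distribution into a local conda environment (template name assumed from this directory).
llama stack build --template meta-reference-quantized-gpu --image-type conda

# Start the server with the run configuration from this directory.
llama stack run ./run.yaml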