# Meta Reference Distribution

The `llamastack/distribution-meta-reference-gpu` distribution consists of the following provider configurations.

| API | Inference | Agents | Memory | Safety | Telemetry |
|-----|-----------|--------|--------|--------|-----------|
| **Provider(s)** | meta-reference | meta-reference | meta-reference, remote::pgvector, remote::chroma | meta-reference | meta-reference |
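If you plan to use the `remote::chroma` memory provider, the distribution expects a reachable Chroma server. A minimal sketch for starting one with Docker follows; the `chromadb/chroma` image and port 8000 are upstream Chroma defaults, not something this distribution ships:

```bash
# Start a standalone Chroma server for the remote::chroma memory provider.
# Image name and port are upstream Chroma defaults (assumed here); point the
# chroma provider config in run.yaml at this host/port.
docker run -d --name chroma -p 8000:8000 chromadb/chroma
```

The same idea applies to `remote::pgvector`, which expects a Postgres instance with the pgvector extension.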

## Start the Distribution (Single Node GPU)

> **Note:** This assumes you have access to a GPU on the machine where you start the local server.

> **Note:** `~/.llama` should be the path containing the downloaded weights of the Llama models.
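Before starting the container, it can help to confirm the weights are actually present under `~/.llama`. A quick sanity check; the `checkpoints/` subdirectory is an assumption based on the `llama` CLI's default download layout:

```bash
# Sanity-check that model weights exist where the container expects them.
# The checkpoints/ layout is an assumption from the llama CLI's defaults.
ls -lh ~/.llama/checkpoints
```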

To download and run a pre-built Docker container, use the following command:

```bash
docker run -it \
  -p 5000:5000 \
  -v ~/.llama:/root/.llama \
  -v ./run.yaml:/root/my-run.yaml \
  --gpus=all \
  llamastack/distribution-meta-reference-gpu \
  --yaml_config /root/my-run.yaml
```
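Once the container is up, you can verify the server is listening on port 5000. The exact API routes depend on your Llama Stack version, so the endpoint below is illustrative only:

```bash
# Check that the server is responding. The /models/list route is an
# assumption for this Llama Stack version; substitute whatever route the
# server logs or API docs report.
curl http://localhost:5000/models/list
```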

## Alternative: Build and start the distribution locally via conda

- You may check out the Getting Started guide for more details on building locally via conda and starting up a meta-reference distribution; a sketch of the typical commands follows below.
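As a rough sketch of the conda flow: the commands use the `llama` CLI, but the template name and flags are assumptions, so defer to the Getting Started guide for the authoritative steps:

```bash
# Build the distribution into a conda environment, then start the server.
# --template meta-reference-gpu and --image-type conda are assumptions based
# on the llama CLI at the time of writing.
llama stack build --template meta-reference-gpu --image-type conda
llama stack run ./run.yaml --port 5000
```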