
# Llama Stack Distributions

```{toctree}
:maxdepth: 2
:hidden:

self_hosted_distro/index
remote_hosted_distro/index
ondevice_distro/index
```

## Introduction

Llama Stack Distributions are pre-built Docker containers/Conda environments that assemble APIs and Providers to provide a consistent whole to the end application developer. These distributions allow you to mix-and-match providers - some could be backed by local code and some could be remote. This flexibility enables you to choose the optimal setup for your use case, such as serving a small model locally while using a cloud provider for larger models, all while maintaining a consistent API interface for your application.
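Because every distribution exposes the same Llama Stack REST API, client code does not change when the inference provider does. Below is a minimal, hedged sketch of such a request; the port (5000) and the chat-completion route are assumptions for this illustration and may differ between releases.

```bash
# Chat completion request against a locally running distribution.
# Port 5000 and the /inference/chat_completion route are assumptions for this sketch.
$ curl -X POST http://localhost:5000/inference/chat_completion \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Llama3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```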

## Decide Your Build Type

There are two ways to start a Llama Stack:

- **Docker**: we provide a number of pre-built Docker containers allowing you to get started instantly. If you are focused on application development, we recommend this option.
- **Conda**: the `llama` CLI provides a simple set of commands to build, configure and run a Llama Stack server containing the exact combination of providers you wish. We have provided various templates to make getting started easier.

Both of these provide options to run model inference using our reference implementations, Ollama, TGI, or vLLM, or even remote providers like Fireworks, Together, Bedrock, etc.
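As a hedged sketch of the Conda path (the template name, flags, and paths below are illustrative and may differ between releases), building and running a server from a template looks roughly like:

```bash
# Build a distribution from one of the provided templates into a Conda environment
# (the "ollama" template name and the --image-type flag are assumptions for this sketch)
$ llama stack build --template ollama --image-type conda

# Start the server from the generated run configuration
# (the path is a placeholder; the build step reports where the config was written)
$ llama stack run <path-to-generated-run.yaml>
```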

## Decide Your Inference Provider

Running inference on the underlying Llama model is one of the most critical requirements. Depending on what hardware you have available, you have various options. Note that each option has different prerequisites.

Please see the detailed pages for each type of distribution we offer:

1. **[Self-Hosted Distributions](./self_hosted_distro/index)**: If you want to run Llama Stack inference on your local machine.
2. **[Remote-Hosted Distributions](./remote_hosted_distro/index)**: If you want to connect to a remote hosted inference provider.
3. **[On-device Distributions](./ondevice_distro/index)**: If you want to run Llama Stack inference on your iOS / Android device.

## Building Your Own Distribution

### Prerequisites

```bash
$ git clone git@github.com:meta-llama/llama-stack.git
```
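If you plan to use the Conda build path, you will typically also want to install the repository into your environment, for example (a hedged sketch of common steps, not an exhaustive prerequisite list):

```bash
# Assumes an existing Conda (or virtualenv) environment is already active;
# installs the llama CLI and server code in editable mode
$ cd llama-stack
$ pip install -e .
```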

### Starting the Distribution

::::{tab-set}

:::{tab-item} meta-reference-gpu

#### System Requirements

Access to Single-Node GPU to start a local server.

#### Downloading Models

Please make sure you have the Llama model checkpoints downloaded in `~/.llama` before proceeding. See the installation guide to download the models.

```bash
$ ls ~/.llama/checkpoints
Llama3.1-8B           Llama3.2-11B-Vision-Instruct  Llama3.2-1B-Instruct  Llama3.2-90B-Vision-Instruct  Llama-Guard-3-8B
Llama3.1-8B-Instruct  Llama3.2-1B                   Llama3.2-3B-Instruct  Llama-Guard-3-1B              Prompt-Guard-86M
```
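
If a checkpoint you need is missing, it can typically be fetched with the `llama download` CLI (a hedged sketch; the model ID and signed URL below are placeholders, and the exact flags are covered in the installation guide):

```bash
# Model ID and signed META_URL are placeholders for this sketch
$ llama download --source meta --model-id Llama3.2-3B-Instruct --meta-url <SIGNED_META_URL>
```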

:::

:::{tab-item} vLLM

#### System Requirements

Access to Single-Node GPU to start a vLLM server.

:::

:::{tab-item} tgi

#### System Requirements

Access to Single-Node GPU to start a TGI server.

:::

:::{tab-item} ollama

#### System Requirements

Access to Single-Node CPU/GPU able to run ollama.

:::

:::{tab-item} together

#### System Requirements

Access to Single-Node CPU with Together hosted endpoint via API_KEY from together.ai.

:::

:::{tab-item} fireworks

#### System Requirements

Access to Single-Node CPU with Fireworks hosted endpoint via API_KEY from fireworks.ai.

:::

::::

The exact commands to start each of these distributions (meta-reference-gpu, vLLM, tgi, ollama, together, fireworks) are covered in the corresponding self-hosted and remote-hosted distribution guides linked above.
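As a rough illustration only (the image name, port, and volume mount below are assumptions; consult the relevant distribution guide for the authoritative command), starting the Ollama distribution with Docker might look like:

```bash
# Image name, port, and mount point are assumptions for this sketch
$ docker run -it \
    -p 5000:5000 \
    -v ~/.llama:/root/.llama \
    llamastack/distribution-ollama
```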

## Troubleshooting

- If you encounter any issues, search through our [GitHub Issues](https://github.com/meta-llama/llama-stack/issues), or file a new issue.
- Use the `--port <PORT>` flag to use a different port number. For `docker run`, update the `-p <PORT>:<PORT>` flag accordingly, as sketched below.
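
A hedged sketch of the port override (the run configuration path and image name are placeholders):

```bash
# CLI: start the server on a non-default port
$ llama stack run <path-to-run.yaml> --port 8080

# Docker: update the host:container mapping to match
$ docker run -p 8080:8080 <distribution-image>
```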