Composable building blocks to build Llama Apps https://llama-stack.readthedocs.io
Remove "routing_table" and "routing_key" concepts for the user (#201)
Ashwin Bharambe, commit 6bb57e72a7
This PR makes several core changes to the developer experience surrounding Llama Stack.

Background: PR #92 introduced the notion of "routing" to the Llama Stack. It introduced three object types: (1) models, (2) shields and (3) memory banks. Each of these objects can be associated with a distinct provider, so that, for example, model A can be served locally while models B and C are served remotely.

However, this had a few drawbacks:

  • You could not address the provider instances -- i.e., if you configured "meta-reference" with a given model, you could not assign an identifier to this instance which you could re-use later.
  • The above meant that you could not register a "routing_key" (e.g., a model) dynamically and say "please use this existing provider I have already configured" for a new model.
  • The terms "routing_table" and "routing_key" were exposed directly to the user. In my view, this is way too much overhead for a new user (which almost everyone is). People come to the stack wanting to do ML and encounter a completely unexpected term.
What this PR does: This PR structures the run config with only a single prominent key:

- providers
Providers are instances of configured provider types. Here's an example showing two instances of the remote::tgi provider, each serving a different model.

providers:
  inference:
  - provider_id: foo
    provider_type: remote::tgi
    config: { ... }
  - provider_id: bar
    provider_type: remote::tgi
    config: { ... }
Secondly, the PR adds dynamic registration of { models | shields | memory_banks } to the API surface. The distribution still acts as a "routing table" (as before), except that it asks the backing providers for a listing of these objects. For example, it asks a TGI or Ollama inference adapter which models it is serving. Only the models that are actually being served can be requested by the user for inference; otherwise, the Stack server will throw an error.

When dynamically registering these objects, you can use the provider IDs shown above. Info about providers can be obtained using the Api.inspect set of endpoints (/providers, /routes, etc.), as sketched below.
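For illustration, here is a hedged sketch of dynamic registration from Python; the client methods and payload fields below are assumptions based on this description, not a confirmed SDK surface:

from llama_stack_client import LlamaStackClient

# Assumed client surface; consult the SDK for exact method and field names.
client = LlamaStackClient(base_url="http://localhost:5000")

# Ask the already-configured "bar" TGI instance to serve another model.
client.models.register(
    model={
        "identifier": "my-fine-tune",
        "llama_model": "Llama3.1-8B-Instruct",
        "provider_id": "bar",
    }
)

# Inspect what the distribution knows about.
print(client.providers.list())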

The examples above show the correspondence between inference providers and models registry items. Things work similarly for the safety <=> shields and memory <=> memory_banks pairs.

Registry: This PR also makes it so that providers need to implement additional methods for registering and listing objects. For example, each Inference provider is now expected to implement the ModelsProtocolPrivate protocol (naming is not great!), which consists of two methods:

  • register_model
  • list_models

The goal is to inform the provider that a certain model needs to be supported so the provider can make any relevant backend changes if needed (or throw an error if the model cannot be supported). A rough sketch of these hooks follows.
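Here is a minimal sketch of what implementing these hooks might look like, assuming in-memory bookkeeping; the class and field names are illustrative, not the exact ModelsProtocolPrivate source:

from typing import Any, Dict, List

# Illustrative provider implementing the two registry hooks described
# above; the real protocol lives in llama_stack and may differ in detail.
class MyInferenceProvider:
    def __init__(self) -> None:
        self.registered: Dict[str, Any] = {}
        # Models the backing engine can actually serve (assumed set).
        self.backend_supported = {"Llama3.1-8B-Instruct"}

    async def register_model(self, model: Any) -> None:
        # Make any backend changes needed, or refuse unsupported models.
        if model["llama_model"] not in self.backend_supported:
            raise ValueError(f"{model['llama_model']} cannot be served by this provider")
        self.registered[model["identifier"]] = model

    async def list_models(self) -> List[Any]:
        return list(self.registered.values())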

There are many other cleanups included, some of which are detailed in a follow-up comment.

Llama Stack


This repository contains the Llama Stack API specifications as well as API Providers and Llama Stack Distributions.

The Llama Stack defines and standardizes the building blocks needed to bring generative AI applications to market. These blocks span the entire development lifecycle: from model training and fine-tuning, through product evaluation, to building and running AI agents in production. Beyond definition, we are building providers for the Llama Stack APIs: we are developing open-source versions and partnering with providers, ensuring developers can assemble AI solutions using consistent, interlocking pieces across platforms. The ultimate goal is to accelerate innovation in the AI space.

The Stack APIs are rapidly improving, but still very much a work in progress, and we invite feedback as well as direct contributions.

APIs

The Llama Stack consists of the following set of APIs:

  • Inference
  • Safety
  • Memory
  • Agentic System
  • Evaluation
  • Post Training
  • Synthetic Data Generation
  • Reward Scoring

Each of these APIs is itself a collection of REST endpoints.
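For example, a chat completion is just an HTTP POST against the Inference API. The sketch below assumes a server on localhost port 5000; check the generated API spec for the exact path and schema:

import requests

# Hypothetical call against a locally running Llama Stack server.
response = requests.post(
    "http://localhost:5000/inference/chat_completion",
    json={
        "model": "Llama3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json())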

API Providers

A Provider is what makes the API real -- it supplies the actual implementation backing the API.

As an example, for Inference, the implementation could be backed by open-source libraries such as [ torch | vLLM | TensorRT ].

A provider can also be just a pointer to a remote REST service -- for example, cloud providers or dedicated inference providers could serve these APIs.

Llama Stack Distribution

A Distribution is where APIs and Providers are assembled together to provide a consistent whole to the end application developer. You can mix and match providers -- some could be backed by local code and some could be remote. As a hobbyist, you can serve a small model locally while choosing a cloud provider for a large model. Regardless, the higher-level APIs your app works with don't need to change at all. You can even imagine moving across the server / mobile-device boundary, always using the same uniform set of APIs for developing generative AI applications; the sketch below illustrates this.
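As a sketch (assuming the Python client SDK; method names may differ), the application code below stays identical whether the distribution routes this model to a local GPU or to a hosted provider:

from llama_stack_client import LlamaStackClient

# Only the base_url changes when you swap distributions; the call does not.
client = LlamaStackClient(base_url="http://localhost:5000")

response = client.inference.chat_completion(
    model="Llama3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Write a haiku about llamas."}],
)
print(response)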

Supported Llama Stack Implementations

API Providers

| API Provider Builder | Environments | Agents | Inference | Memory | Safety | Telemetry |
|---|---|---|---|---|---|---|
| Meta Reference | Single Node | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| Fireworks | Hosted | ✔️ | ✔️ | ✔️ | | |
| AWS Bedrock | Hosted | | ✔️ | | ✔️ | |
| Together | Hosted | ✔️ | ✔️ | | ✔️ | |
| Ollama | Single Node | | ✔️ | | | |
| TGI | Hosted and Single Node | | ✔️ | | | |
| Chroma | Single Node | | | ✔️ | | |
| PG Vector | Single Node | | | ✔️ | | |
| PyTorch ExecuTorch | On-device iOS | ✔️ | ✔️ | | | |

Distributions

| Distribution Provider | Docker | Inference | Memory | Safety | Telemetry |
|---|---|---|---|---|---|
| Meta Reference | Local GPU, Local CPU | ✔️ | ✔️ | ✔️ | ✔️ |
| Dell-TGI | Local TGI + Chroma | ✔️ | ✔️ | ✔️ | ✔️ |

Installation

You can install this repository as a package with pip install llama-stack

If you want to install from source:

mkdir -p ~/local
cd ~/local
git clone git@github.com:meta-llama/llama-stack.git

conda create -n stack python=3.10
conda activate stack

cd llama-stack
$CONDA_PREFIX/bin/pip install -e .
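
Either way, you can sanity-check the install with a quick import from the same environment:

# Verify the package is importable; prints where it was installed from.
import llama_stack
print(llama_stack.__file__)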

Documentation

The llama CLI makes it easy to work with the Llama Stack set of tools. Please see the following docs for details.

  • CLI reference
    • Guide to using the llama CLI to work with Llama models (download, study prompts) and to build/start a Llama Stack distribution.
  • Getting Started
    • Guide to build and run a Llama Stack server.
  • Contributing

Llama Stack Client SDK

| Language | Client SDK | Package |
|---|---|---|
| Python | llama-stack-client-python | PyPI |
| Swift | llama-stack-client-swift | |
| Node | llama-stack-client-node | NPM |
| Kotlin | llama-stack-client-kotlin | |

Check out our client SDKs for connecting to a Llama Stack server in your preferred language: you can choose from Python, Node, Swift, and Kotlin to quickly build your applications.
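As a minimal Python example (method names assumed from the Python SDK; other language SDKs differ in surface but not in spirit):

from llama_stack_client import LlamaStackClient

# Connect to a running Llama Stack server and list the registered models.
client = LlamaStackClient(base_url="http://localhost:5000")
for model in client.models.list():
    print(model)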