
llama-stack


This repository contains the specifications and implementations of the APIs that make up the Llama Stack.

The Llama Stack defines and standardizes the building blocks needed to bring generative AI applications to market. These blocks span the entire development lifecycle: from model training and fine-tuning, through product evaluation, to invoking AI agents in production. Beyond definition, we're developing open-source versions and partnering with cloud providers, ensuring developers can assemble AI solutions using consistent, interlocking pieces across platforms. The ultimate goal is to accelerate innovation in the AI space.

The Stack APIs are rapidly improving, but they are still very much a work in progress, and we invite feedback as well as direct contributions.

APIs

The Llama Stack consists of the following set of APIs:

  • Inference
  • Safety
  • Memory
  • Agentic System
  • Evaluation
  • Post Training
  • Synthetic Data Generation
  • Reward Scoring

Each of these APIs is itself a collection of REST endpoints.
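
For example, a client could exercise the Inference API over plain HTTP. The snippet below is a minimal sketch, assuming a Llama Stack server running locally on port 5000; the endpoint path, model identifier, and payload shape are illustrative, and the authoritative request schemas live in the API specifications in this repository.

import requests

# Hypothetical call against a locally running Llama Stack server.
# The route and payload here are illustrative -- consult the API
# specs for the actual schema.
response = requests.post(
    "http://localhost:5000/inference/chat_completion",
    json={
        "model": "Meta-Llama3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
response.raise_for_status()
print(response.json())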

API Providers

A Provider is what makes the API real -- it supplies the actual implementation backing the API.

As an example, for Inference, the implementation could be backed by open-source libraries like [ torch | vLLM | TensorRT ].

A provider can also be just a pointer to a remote REST service -- for example, cloud providers or dedicated inference providers could serve these APIs.
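
In code terms, you can picture a provider as any object that satisfies an API's interface, whether it runs the model in-process or forwards calls over the network. The sketch below is purely illustrative -- the class and method names are hypothetical, not the actual interfaces defined in llama_stack.

from typing import Protocol

# Hypothetical sketch of the provider idea; the real interfaces
# are defined in the llama_stack package.
class Inference(Protocol):
    def chat_completion(self, model: str, messages: list[dict]) -> dict: ...

class LocalInferenceProvider:
    """Backs the Inference API with an in-process engine (e.g. vLLM)."""

    def chat_completion(self, model: str, messages: list[dict]) -> dict:
        raise NotImplementedError  # would invoke the local engine here

class RemoteInferenceProvider:
    """Backs the same API by forwarding calls to a remote REST service."""

    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    def chat_completion(self, model: str, messages: list[dict]) -> dict:
        raise NotImplementedError  # would POST to self.base_url here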

Llama Stack Distribution

A Distribution is where APIs and Providers are assembled together to provide a consistent whole to the end application developer. You can mix and match providers -- some could be backed by local code and some could be remote. As a hobbyist, you can serve a small model locally while relying on a cloud provider for a large model. Either way, the higher-level APIs your app works against don't need to change at all. You can even move across the server / mobile-device boundary while always using the same uniform set of APIs for developing generative AI applications.
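
To make the mix-and-match idea concrete, here is a purely hypothetical routing table expressed as a Python dict. The actual distribution configuration is produced by the llama CLI and its schema may differ; the provider names and URLs below are placeholders.

# Hypothetical illustration of per-API, per-model routing in a
# distribution; names and URLs are placeholders, not a real schema.
distribution = {
    "inference": {
        "Llama3.1-8B-Instruct": {"provider": "ollama", "url": "http://localhost:11434"},
        "Llama3.1-405B-Instruct": {"provider": "together", "url": "https://api.together.xyz"},
    },
    "safety": {"provider": "local-reference"},   # shields run locally
    "memory": {"provider": "remote-weaviate"},   # vector bank served remotely
}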

Installation

You can install this repository as a package with pip install llama-stack.

If you want to install from source:

# clone the repository
mkdir -p ~/local
cd ~/local
git clone git@github.com:meta-llama/llama-stack.git

# create and activate an isolated environment
conda create -n stack python=3.10
conda activate stack

# install the package in editable mode
cd llama-stack
pip install -e .

The Llama CLI

The llama CLI makes it easy to work with the Llama Stack set of tools, including installing and running Distributions, downloading models, studying model prompt formats, etc. Please see the CLI reference for details.