From 68654460f804c426acbe1996a4a0ec990c30f638 Mon Sep 17 00:00:00 2001
From: Ashwin Bharambe
Date: Wed, 7 Aug 2024 15:14:59 -0700
Subject: [PATCH] Update README, add newline between API surface configurations

---
 README.md                                     | 39 +++++++++++++++++--
 llama_toolchain/cli/distribution/configure.py |  1 +
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 1cd0e58d4..996512b2f 100644
--- a/README.md
+++ b/README.md
@@ -3,9 +3,42 @@
 [![PyPI - Downloads](https://img.shields.io/pypi/dm/llama-toolchain)](https://pypi.org/project/llama-toolchain/)
 [![Discord](https://img.shields.io/discord/1257833999603335178)](https://discord.gg/TZAAYNVtrU)
 
-This repo contains the API specifications for various components of the Llama Stack as well implementations for some of those APIs like model inference.
+This repository contains the API specifications and implementations for various components of the Llama Stack.
+
+The [Llama Stack](https://github.com/meta-llama/llama-toolchain/pull/8) defines and standardizes the building blocks needed to bring generative AI applications to market. These blocks span the entire development lifecycle: from model training and fine-tuning, through product evaluation, to invoking AI agents in production. Beyond these definitions, we are developing open-source implementations and partnering with cloud providers, ensuring developers can assemble AI solutions using consistent, interlocking pieces across platforms. The ultimate goal is to accelerate innovation in the AI space.
+
+The Stack APIs are rapidly improving but are still very much a work in progress, and we invite feedback as well as direct contributions.
+
+
+## APIs
+
+The Llama Stack consists of the following set of APIs:
+
+- Inference
+- Safety
+- Memory
+- Agentic System
+- Evaluation
+- Post Training
+- Synthetic Data Generation
+- Reward Scoring
+
+Each API is itself a collection of REST endpoints.
+
+
+## API Providers
+
+A Provider is what makes an API real -- it provides the actual implementation backing the API.
+
+As an example, the implementation of the Inference API could be backed by primitives from `[ torch | vLLM | TensorRT ]`.
+
+A Provider can also be just a pointer to a remote REST service -- for example, cloud providers like `[ aws | gcp ]` could serve these APIs.
+
+
+## Llama Stack Distribution
+
+A Distribution is where APIs and Providers are assembled together to provide a consistent whole to the end application developer. You can mix and match Providers -- some could be backed by inline code and some could be remote. As a hobbyist, you can serve a small model locally but choose a cloud provider for a large model. Either way, the higher-level APIs your app works with don't need to change at all. You can even imagine moving across the server / mobile-device boundary while always using the same uniform set of APIs for developing generative AI applications.
 
-The Llama Stack consists of toolchain-apis and agentic-apis. This repo contains the toolchain-apis.
 
 ## Installation
 
@@ -27,4 +60,4 @@ pip install -e .
 
 ## The Llama CLI
 
-The `llama` CLI makes it easy to configure and run the Llama toolchain. Read the [CLI reference](docs/cli_reference.md) for details.
+The `llama` CLI makes it easy to work with the Llama Stack set of tools, including installing and running Distributions, downloading models, and studying model prompt formats. Please see the [CLI reference](docs/cli_reference.md) for details.
diff --git a/llama_toolchain/cli/distribution/configure.py b/llama_toolchain/cli/distribution/configure.py
index e90c875c5..8f405bbf7 100644
--- a/llama_toolchain/cli/distribution/configure.py
+++ b/llama_toolchain/cli/distribution/configure.py
@@ -91,6 +91,7 @@ def configure_llama_distribution(dist: "Distribution", config: "DistributionConf
                 else None
             ),
         )
+        print("")
 
         config.providers[api.value] = {
             "provider_id": provider_spec.provider_id,
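
The one-line `configure.py` change above prints a blank line after each API surface is configured, so the interactive prompts for successive APIs don't run together on screen. Below is a minimal sketch of that output pattern, assuming a simple prompt-per-API loop; `API_SURFACES` and `prompt_for_provider` are hypothetical stand-ins, since the real loop body of `configure_llama_distribution` is only partially visible in this hunk.

```python
# Minimal sketch of the configure loop's output pattern -- not the actual
# implementation. API_SURFACES and prompt_for_provider are hypothetical
# stand-ins for internals of configure_llama_distribution.

API_SURFACES = ["inference", "safety", "memory", "agentic_system"]


def prompt_for_provider(api: str) -> dict:
    # Stand-in for the interactive prompts that assemble a provider config.
    provider_id = input(f"Provider for the '{api}' API: ")
    return {"provider_id": provider_id}


providers = {}
for api in API_SURFACES:
    providers[api] = prompt_for_provider(api)
    print("")  # blank line between API surface configurations, as in the patch
```

Without the `print("")`, the prompt for the next API surface begins on the line immediately after the previous answer, which makes a multi-API configuration session hard to scan.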