Mirror of https://github.com/meta-llama/llama-stack.git

Update README, add newline between API surface configurations

commit 68654460f8 (parent 66412b932b)
2 changed files with 37 additions and 3 deletions

README.md (39 changes)

@@ -3,9 +3,42 @@
[](https://pypi.org/project/llama-toolchain/)
[](https://discord.gg/TZAAYNVtrU)

-This repo contains the API specifications for various components of the Llama Stack as well implementations for some of those APIs like model inference.
+This repository contains the API specifications and implementations for various components of the Llama Stack.

+The [Llama Stack](https://github.com/meta-llama/llama-toolchain/pull/8) defines and standardizes the building blocks needed to bring generative AI applications to market. These blocks span the entire development lifecycle: from model training and fine-tuning, through product evaluation, to invoking AI agents in production. Beyond definition, we're developing open-source versions and partnering with cloud providers, ensuring developers can assemble AI solutions using consistent, interlocking pieces across platforms. The ultimate goal is to accelerate innovation in the AI space.

+The Stack APIs are rapidly improving, but they are still very much a work in progress, and we invite feedback as well as direct contributions.

+## APIs

+The Llama Stack consists of the following set of APIs:

+- Inference
+- Safety
+- Memory
+- Agentic System
+- Evaluation
+- Post Training
+- Synthetic Data Generation
+- Reward Scoring

+Each API is itself a collection of REST endpoints.

+## API Providers

+A Provider is what makes an API real -- it supplies the actual implementation backing the API.

+As an example, for Inference, the implementation could be backed by primitives from `[ torch | vLLM | TensorRT ]`.

+A provider can also be just a pointer to a remote REST service -- for example, cloud providers like `[ aws | gcp ]` could serve these APIs.

+## Llama Stack Distribution

+A Distribution is where APIs and Providers are assembled together to provide a consistent whole to the end application developer. You can mix and match providers -- some could be backed by inline code and some could be remote. As a hobbyist, you can serve a small model locally but choose a cloud provider for a large model; either way, the higher-level APIs your app works against don't need to change at all. You can even move across the server / mobile-device boundary while always using the same uniform set of APIs for developing Generative AI applications.

-The Llama Stack consists of toolchain-apis and agentic-apis. This repo contains the toolchain-apis.

## Installation

@@ -27,4 +60,4 @@ pip install -e .

## The Llama CLI

-The `llama` CLI makes it easy to configure and run the Llama toolchain. Read the [CLI reference](docs/cli_reference.md) for details.
+The `llama` CLI makes it easy to work with the Llama Stack set of tools, including installing and running Distributions, downloading models, studying model prompt formats, etc. Please see the [CLI reference](docs/cli_reference.md) for details.
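
The new APIs section describes each API as a collection of REST endpoints. As a rough illustration only -- the port, endpoint path, and payload shape below are assumptions for this sketch, not taken from this commit -- calling a locally served Inference API might look like:

```python
import requests

# Hypothetical sketch: assumes a local Distribution serving the Inference API
# on port 5000 with a chat-completion style endpoint. The real paths and
# schemas are defined by the API specifications in this repository.
response = requests.post(
    "http://localhost:5000/inference/chat_completion",
    json={
        "model": "llama-3-8b-instruct",  # assumed model identifier
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```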
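
The Distribution section says providers can be mixed and matched per API, and the configure hunk below builds exactly such a `providers` map keyed by API name. Here is a minimal sketch of what a mixed local/remote configuration could look like -- the provider ids, keys, and values are illustrative assumptions; only the `provider_id` field comes from the diff:

```python
# Hypothetical sketch of a Distribution's provider map, mirroring the
# `config.providers[api.value] = {"provider_id": ...}` assignment below.
providers = {
    "inference": {
        "provider_id": "meta-reference",      # assumed: inline, local model
        "model": "llama-3-8b-instruct",       # assumed model identifier
    },
    "safety": {
        "provider_id": "remote",              # assumed: hosted REST service
        "url": "https://safety.example.com",  # placeholder URL
    },
}
```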

@@ -91,6 +91,7 @@ def configure_llama_distribution(dist: "Distribution", config: "DistributionConf

            else None
        ),
    )
+   print("")

    config.providers[api.value] = {
        "provider_id": provider_spec.provider_id,
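
The lone addition here is `print("")`, which writes a blank line between the configuration output for successive API surfaces -- the newline referenced in the commit title.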