# llama-stack
[PyPI](https://pypi.org/project/llama-stack/) [Discord](https://discord.gg/TZAAYNVtrU)
This repository contains the specifications and implementations of the APIs which are part of the Llama Stack.
The Llama Stack defines and standardizes the building blocks needed to bring generative AI applications to market. These blocks span the entire development lifecycle: from model training and fine-tuning, through product evaluation, to invoking AI agents in production. Beyond definition, we're developing open-source versions and partnering with cloud providers, ensuring developers can assemble AI solutions using consistent, interlocking pieces across platforms. The ultimate goal is to accelerate innovation in the AI space.
The Stack APIs are rapidly improving but are still very much a work in progress, and we invite feedback as well as direct contributions.
## APIs
The Llama Stack consists of the following set of APIs:
- Inference
- Safety
- Memory
- Agentic System
- Evaluation
- Post Training
- Synthetic Data Generation
- Reward Scoring
Each API is itself a collection of REST endpoints.
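
As a rough illustration, and only a sketch since the generated OpenAPI spec in this repository is the authoritative contract, a chat completion request against a locally running distribution might look like the following; the port, route, and payload fields shown here are assumptions.

```bash
# Hypothetical request against the Inference API of a distribution assumed to
# be serving on localhost:5000. The route and JSON fields are illustrative
# only; the OpenAPI spec generated in this repository defines the real contract.
curl -X POST http://localhost:5000/inference/chat_completion \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "Meta-Llama3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```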
## API Providers
A Provider is what makes the API real -- it provides the actual implementation backing the API.
As an example, for Inference, the implementation could be backed by open source libraries like `[ torch | vLLM | TensorRT ]`.
A provider can also be just a pointer to a remote REST service -- for example, cloud providers or dedicated inference providers could serve these APIs.
## Llama Stack Distribution
A Distribution is where APIs and Providers are assembled together to provide a consistent whole to the end application developer. You can mix-and-match providers -- some could be backed by local code and some could be remote. As a hobbyist, you can serve a small model locally but choose a cloud provider for a large model. Regardless, the higher-level APIs your app works with don't need to change at all. You can even imagine moving across the server / mobile-device boundary while always using the same uniform set of APIs for developing Generative AI applications.
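
As a minimal sketch of how a distribution comes together in practice, the commands below assume the `llama` CLI described in the last section and a `stack` group of subcommands for building, configuring, and running a distribution; the exact subcommand names, arguments, and prompts are assumptions here, so check `llama stack --help` and the CLI reference for what your installed version actually provides.

```bash
# Sketch only: assumes `llama stack` subcommands for assembling and serving a
# distribution; verify names and flags with `llama stack --help`.
llama stack build                   # choose which provider backs each API
llama stack configure <stack-name>  # fill in provider settings (endpoints, API keys)
llama stack run <stack-name>        # serve the assembled distribution locally
```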
## Installation
You can install this repository as a [package](https://pypi.org/project/llama-stack/) with `pip install llama-stack`.
If you want to install from source:
```bash
mkdir -p ~/local
cd ~/local
git clone git@github.com:meta-llama/llama-stack.git

conda create -n stack python=3.10
conda activate stack

cd llama-stack
pip install -e .
```
## The Llama CLI
The `llama` CLI makes it easy to work with the Llama Stack set of tools, including installing and running Distributions, downloading models, studying model prompt formats, etc. Please see the [CLI reference](docs/cli_reference.md) for details.
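
Since the CLI surface is evolving alongside the APIs, a safe starting point is to explore it directly; the subcommands listed below correspond to the tasks mentioned above, but treat them as assumptions and rely on the `--help` output and the CLI reference for the authoritative options.

```bash
# Explore the CLI rather than memorizing flags; the subcommand names below are
# assumed from the tasks described above and may differ in your version.
llama --help            # top-level commands
llama stack --help      # build, configure, and run Distributions
llama download --help   # download model checkpoints
llama model --help      # study models, e.g. prompt formats
```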