forked from phoenix-oss/llama-stack-mirror
Much more documentation work, things are getting a bit consumable right now
This commit is contained in:
parent 98e213e96c
commit 900b0556e7
17 changed files with 143 additions and 162 deletions
9 docs/_static/css/my_theme.css vendored
@@ -4,6 +4,11 @@
    max-width: 90%;
}

.wy-side-nav-search, .wy-nav-top {
    background: #666666;
}

.wy-nav-side {
    /* background: linear-gradient(45deg, #2980B9, #16A085); */
    background: linear-gradient(90deg, #332735, #1b263c);
}

.wy-side-nav-search {
    background-color: transparent !important;
}

@@ -1,14 +0,0 @@
# API Providers

A Provider is what makes the API real -- it provides the actual implementation backing the API.

As an example, for Inference, we could have the implementation be backed by open source libraries like `[ torch | vLLM | TensorRT ]` as possible options.

A provider can also be just a pointer to a remote REST service -- for example, cloud providers or dedicated inference providers could serve these APIs.

```{toctree}
:maxdepth: 1

new_api_provider
memory_api
```

64 docs/source/concepts/index.md Normal file
@@ -0,0 +1,64 @@
# Core Concepts

Given Llama Stack's service-oriented philosophy, a few concepts and workflows arise which may not feel completely natural in the LLM landscape, especially if you come from a background in other frameworks.


## APIs

A Llama Stack API is described as a collection of REST endpoints. We currently support the following APIs (an example request follows the list):

- **Inference**: run inference with an LLM
- **Safety**: apply safety policies to the output at a system (not only model) level
- **Agents**: run multi-step agentic workflows with LLMs with tool usage, memory (RAG), etc.
- **Memory**: store and retrieve data for RAG, chat history, etc.
- **DatasetIO**: interface with datasets and data loaders
- **Scoring**: evaluate outputs of the system
- **Eval**: generate outputs (via Inference or Agents) and perform scoring
- **Telemetry**: collect telemetry data from the system
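
Because each API is just a set of REST endpoints, you can exercise a running stack with plain HTTP. A minimal sketch, assuming a server on `localhost:5000`; the route and payload shape here are illustrative assumptions and may differ between versions:

```bash
# Hypothetical call to the Inference API of a locally running stack.
# The route and payload shape are assumptions for illustration.
curl -X POST http://localhost:5000/inference/chat_completion \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Llama3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```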

We are working on adding a few more APIs to complete the application lifecycle. These will include:
- **Batch Inference**: run inference on a dataset of inputs
- **Batch Agents**: run agents on a dataset of inputs
- **Post Training**: fine-tune a Llama model
- **Synthetic Data Generation**: generate synthetic data for model development

## API Providers

The goal of Llama Stack is to build an ecosystem where users can easily swap out different implementations for the same API. Obvious examples include:
- LLM inference providers (e.g., Fireworks, Together, AWS Bedrock, etc.),
- Vector databases (e.g., ChromaDB, Weaviate, Qdrant, etc.),
- Safety providers (e.g., Meta's Llama Guard, AWS Bedrock Guardrails, etc.)

Providers come in two flavors (a configuration sketch follows the list):
- **Remote**: the provider runs as a separate service external to the Llama Stack codebase. Llama Stack contains a small amount of adapter code.
- **Inline**: the provider is fully specified and implemented within the Llama Stack codebase. It may be a simple wrapper around an existing library, or a full-fledged implementation within Llama Stack.
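
To make this concrete, here is a rough sketch of how a remote provider might appear in a stack's run configuration, written as a shell heredoc. The field names (`provider_id`, `provider_type`, `config`) are assumptions for illustration, with `remote::fireworks` borrowed from the provider types listed elsewhere in these docs:

```bash
# Hypothetical run-configuration fragment -- field names are assumptions
# for illustration and may differ between versions.
cat >> run.yaml <<'EOF'
providers:
  inference:
    - provider_id: fireworks
      provider_type: remote::fireworks
      config:
        api_key: YOUR_FIREWORKS_API_KEY
EOF
```
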
## Resources

Some of these APIs are associated with a set of **Resources**. Here is the mapping of APIs to resources:

- **Inference**, **Eval** and **Post Training** are associated with `Model` resources.
- **Safety** is associated with `Shield` resources.
- **Memory** is associated with `Memory Bank` resources.
- **DatasetIO** is associated with `Dataset` resources.
- **Scoring** is associated with `ScoringFunction` resources.
- **Eval** is associated with `Model` and `EvalTask` resources.

Furthermore, we allow these resources to be **federated** across multiple providers. For example, you may have some Llama models served by Fireworks while others are served by AWS Bedrock. Regardless, they will all work seamlessly with the same uniform Inference API provided by Llama Stack.

```{admonition} Registering Resources
:class: tip

Given this architecture, it is necessary for the Stack to know which provider to use for a given resource. This means you need to explicitly _register_ resources (including models) before you can use them with the associated APIs.
```
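
For example, you might register a model against a specific provider before using it with Inference. A minimal sketch with the client CLI; the `models register` subcommand and its flags are assumptions here and may differ between versions:

```bash
# Hypothetical model registration -- subcommand and flags are assumptions.
llama-stack-client models register Llama3.1-8B-Instruct --provider-id fireworks
llama-stack-client models list  # confirm the model is now registered
```
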
## Distributions

While there is a lot of flexibility to mix-and-match providers, often users will work with a specific set of providers (hardware support, contractual obligations, etc.). We therefore need to provide a _convenient shorthand_ for such collections. We call this shorthand a **Llama Stack Distribution** or a **Distro**. One can think of it as a specific pre-packaged version of the Llama Stack. Here are some examples:

**Remotely Hosted Distro**: These are the simplest to consume from a user perspective. You can simply obtain the API key for these providers, point to a URL, and have _all_ Llama Stack APIs working out of the box. Currently, [Fireworks](https://fireworks.ai/) and [Together](https://together.xyz/) provide such easy-to-consume Llama Stack distributions.

**Locally Hosted Distro**: You may want to run Llama Stack on your own hardware. Typically, though, you still need to use Inference via an external service. You can use providers like HuggingFace TGI, Cerebras, Fireworks, Together, etc. for this purpose. Or you may have access to GPUs and can run a [vLLM](https://github.com/vllm-project/vllm) instance. If you "just" have a regular desktop machine, you can use [Ollama](https://ollama.com/) for inference. To provide convenient quick access to these options, we provide a number of pre-configured locally-hosted Distros.

**On-device Distro**: Finally, you may want to run Llama Stack directly on an edge device (mobile phone or tablet). We provide Distros for iOS and Android (coming soon).

@@ -80,6 +80,5 @@ html_theme_options = {
}

html_static_path = ["../_static"]
html_logo = "../_static/llama-stack-logo.png"
# html_logo = "../_static/llama-stack-logo.png"
html_style = "../_static/css/my_theme.css"

9 docs/source/contributing/index.md Normal file
@@ -0,0 +1,9 @@
# Contributing to Llama Stack

```{toctree}
:maxdepth: 1

new_api_provider
memory_api
```

@@ -1,57 +1,58 @@
# Building Llama Stacks
# Starting a Llama Stack

```{toctree}
:maxdepth: 2
:hidden:
As mentioned in the [Concepts](../concepts/index), Llama Stack Distributions are specific pre-packaged versions of the Llama Stack. These templates make it easy to get started quickly.

self_hosted_distro/index
remote_hosted_distro/index
ondevice_distro/index
```
## Introduction

Llama Stack Distributions are pre-built Docker containers/Conda environments that assemble APIs and Providers to provide a consistent whole to the end application developer.

These distributions allow you to mix-and-match providers - some could be backed by local code and some could be remote. This flexibility enables you to choose the optimal setup for your use case, such as serving a small model locally while using a cloud provider for larger models, all while maintaining a consistent API interface for your application.

## Decide Your Build Type
There are two ways to start a Llama Stack:

- **Docker**: we provide a number of pre-built Docker containers allowing you to get started instantly. If you are focused on application development, we recommend this option.
A Llama Stack Distribution can be consumed in two ways:
- **Docker**: we provide a number of pre-built Docker containers allowing you to get started instantly. If you are focused on application development, we recommend this option. You can also build your own custom Docker container.
- **Conda**: the `llama` CLI provides a simple set of commands to build, configure and run a Llama Stack server containing the exact combination of providers you wish. We have provided various templates to make getting started easier.

Both of these provide options to run model inference using our reference implementations, Ollama, TGI, vLLM or even remote providers like Fireworks, Together, Bedrock, etc.
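
For the Docker path, starting a pre-built distribution is typically a single command. A sketch, assuming the image is published as `llamastack/distribution-ollama` and the server listens on port 5000 (both assumptions for illustration):

```bash
# Hypothetical quick start with a pre-built distribution image.
# Image name, port, and mount point are assumptions for illustration.
docker run -it -p 5000:5000 -v ~/.llama:/root/.llama llamastack/distribution-ollama
```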

### Decide Your Inference Provider

Running inference on the underlying Llama model is one of the most critical requirements. Depending on what hardware you have available, you have various options. Note that each option has different prerequisites.
Which distribution to choose depends on the hardware you have for running LLM inference.

- **Do you have access to a machine with powerful GPUs?**
  If so, we suggest:
  - [distribution-meta-reference-gpu](./self_hosted_distro/meta-reference-gpu.md)
  - [distribution-tgi](./self_hosted_distro/tgi.md)
  - [distribution-remote-vllm](self_hosted_distro/remote-vllm)
  - [distribution-meta-reference-gpu](self_hosted_distro/meta-reference-gpu)
  - [distribution-tgi](self_hosted_distro/tgi)

- **Are you running on a "regular" desktop machine?**
  If so, we suggest:
  - [distribution-ollama](./self_hosted_distro/ollama.md)
  - [distribution-ollama](self_hosted_distro/ollama)

- **Do you have an API key for a remote inference provider like Fireworks, Together, etc.?** If so, we suggest:
  - [distribution-together](./remote_hosted_distro/together.md)
  - [distribution-fireworks](./remote_hosted_distro/fireworks.md)
  - [distribution-together](#remote-hosted-distributions)
  - [distribution-fireworks](#remote-hosted-distributions)

- **Do you want to run Llama Stack inference on your iOS / Android device?** If so, we suggest:
  - [iOS](./ondevice_distro/ios_sdk.md)
  - [Android](https://github.com/meta-llama/llama-stack-client-kotlin) (coming soon)
  - [iOS](ondevice_distro/ios_sdk)
  - [Android](ondevice_distro/android_sdk) (coming soon)

Please see our pages in detail for the types of distributions we offer:

1. [Self-Hosted Distributions](./self_hosted_distro/index.md): If you want to run Llama Stack inference on your local machine.
2. [Remote-Hosted Distributions](./remote_hosted_distro/index.md): If you want to connect to a remote hosted inference provider.
3. [On-device Distributions](./ondevice_distro/index.md): If you want to run Llama Stack inference on your iOS / Android device.
## Remote-Hosted Distributions

Remote-Hosted distributions are hosted endpoints serving the Llama Stack API that you can connect to directly.

| Distribution | Endpoint | Inference | Agents | Memory | Safety | Telemetry |
|--------------|----------|-----------|--------|--------|--------|-----------|
| Together | [https://llama-stack.together.ai](https://llama-stack.together.ai) | remote::together | meta-reference | remote::weaviate | meta-reference | meta-reference |
| Fireworks | [https://llamastack-preview.fireworks.ai](https://llamastack-preview.fireworks.ai) | remote::fireworks | meta-reference | remote::weaviate | meta-reference | meta-reference |

You can use `llama-stack-client` to interact with these endpoints. For example, to list the available models served by the Fireworks endpoint:

```bash
$ pip install llama-stack-client
$ llama-stack-client configure --endpoint https://llamastack-preview.fireworks.ai
$ llama-stack-client models list
```

## On-Device Distributions

On-device distributions are Llama Stack distributions that run locally on your iOS / Android device.

## Building Your Own Distribution

<TODO> talk about llama stack build --image-type conda, etc.
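
Until this section is fleshed out, here is a rough sketch of the Conda flow the TODO refers to; the template name and flags are assumptions, so check `llama stack build --help` for what your version supports:

```bash
# Hypothetical build-and-run flow -- template name and flags are
# assumptions for illustration and may differ between versions.
llama stack build --template ollama --image-type conda
llama stack run ollama --port 5000
```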

### Prerequisites

```bash
@@ -59,81 +60,15 @@ $ git clone git@github.com:meta-llama/llama-stack.git
```

### Starting the Distribution

::::{tab-set}

:::{tab-item} meta-reference-gpu
##### System Requirements
Access to a single-node GPU to start a local server.

##### Downloading Models
Please make sure you have Llama model checkpoints downloaded in `~/.llama` before proceeding. See the [installation guide](../cli_reference/download_models.md) to download the models.

```
$ ls ~/.llama/checkpoints
Llama3.1-8B  Llama3.2-11B-Vision-Instruct  Llama3.2-1B-Instruct  Llama3.2-90B-Vision-Instruct  Llama-Guard-3-8B
Llama3.1-8B-Instruct  Llama3.2-1B  Llama3.2-3B-Instruct  Llama-Guard-3-1B  Prompt-Guard-86M
```
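
If you have not downloaded checkpoints yet, the `llama` CLI can fetch them. One possible invocation (flag names are assumptions and may differ between versions; the signed URL placeholder comes from Meta's download flow):

```bash
# Hypothetical download invocation -- flag names are assumptions.
llama download --source meta --model-id Llama3.2-3B-Instruct --meta-url <SIGNED_META_URL>
```
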
:::

:::{tab-item} vLLM
##### System Requirements
Access to a single-node GPU to start a vLLM server.
:::

:::{tab-item} tgi
##### System Requirements
Access to a single-node GPU to start a TGI server.
:::

:::{tab-item} ollama
##### System Requirements
Access to a single-node CPU/GPU able to run Ollama.
:::

:::{tab-item} together
##### System Requirements
Access to a single-node CPU with a Together-hosted endpoint via an API key from [together.ai](https://api.together.xyz/signin).
:::

:::{tab-item} fireworks
##### System Requirements
Access to a single-node CPU with a Fireworks-hosted endpoint via an API key from [fireworks.ai](https://fireworks.ai/).
:::

::::

::::{tab-set}
:::{tab-item} meta-reference-gpu
- [Start Meta Reference GPU Distribution](./self_hosted_distro/meta-reference-gpu.md)
:::

:::{tab-item} vLLM
- [Start vLLM Distribution](./self_hosted_distro/remote-vllm.md)
:::

:::{tab-item} tgi
- [Start TGI Distribution](./self_hosted_distro/tgi.md)
:::

:::{tab-item} ollama
- [Start Ollama Distribution](./self_hosted_distro/ollama.md)
:::

:::{tab-item} together
- [Start Together Distribution](./self_hosted_distro/together.md)
:::

:::{tab-item} fireworks
- [Start Fireworks Distribution](./self_hosted_distro/fireworks.md)
:::

::::

### Troubleshooting

- If you encounter any issues, search through our [GitHub Issues](https://github.com/meta-llama/llama-stack/issues), or file a new issue.
- Use the `--port <PORT>` flag to use a different port number. For `docker run`, update the `-p <PORT>:<PORT>` flag accordingly. See the sketch below.
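
For instance (a sketch; the distribution name and ports are assumptions for illustration):

```bash
# Hypothetical: run the server on port 5001 instead of the default.
llama stack run ollama --port 5001

# Or, with Docker, keep the host and container ports in sync.
docker run -p 5001:5001 llamastack/distribution-ollama --port 5001
```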

```{toctree}
:maxdepth: 3

remote_hosted_distro/index
ondevice_distro/index
```

@@ -1,6 +1,3 @@
# On-Device Distributions

On-device distributions are Llama Stack distributions that run locally on your iOS / Android device.

```{toctree}
:maxdepth: 1

@@ -1,12 +1,5 @@
# Remote-Hosted Distributions

```{toctree}
:maxdepth: 2
:hidden:

remote
```

Remote-Hosted distributions are hosted endpoints serving the Llama Stack API that you can connect to directly.

| Distribution | Endpoint | Inference | Agents | Memory | Safety | Telemetry |

@@ -1,20 +1,5 @@
# Self-Hosted Distributions

```{toctree}
:maxdepth: 2
:hidden:

meta-reference-gpu
meta-reference-quantized-gpu
ollama
tgi
dell-tgi
together
fireworks
remote-vllm
bedrock
```

We offer deployable distributions where you can host your own Llama Stack server using local inference.

| **Distribution** | **Llama Stack Docker** | Start This Distribution |

@@ -149,6 +149,7 @@ if __name__ == "__main__":

## Next Steps

You can mix and match different providers for inference, memory, agents, evals, etc. See [Building Llama Stacks](../distributions/index.md).

For example applications and more detailed tutorials, visit our [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) repository.
- Learn more about Llama Stack [Concepts](../concepts/index.md)
- Learn how to [Build Llama Stacks](../distributions/index.md)
- See [References](../references/index.md) for more details about the llama CLI and Python SDK
- For example applications and more detailed tutorials, visit our [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) repository.

@@ -54,7 +54,7 @@ Additionally, we have designed every element of the Stack such that APIs as well

## Supported Llama Stack Implementations

Llama Stack already has a number of "adapters" available for some popular Inference and Memory (Vector Store) providers. For other APIs (particularly Safety and Agents), we provide reference implementations you can use to get started. We expect this list to grow over time. We are slowly onboarding more providers to the ecosystem as we get more confidence in the APIs.
Llama Stack already has a number of "adapters" available for some popular Inference and Memory (Vector Store) providers. For other APIs (particularly Safety and Agents), we provide *reference implementations* you can use to get started. We expect this list to grow over time. We are slowly onboarding more providers to the ecosystem as we get more confidence in the APIs.

| **API Provider** | **Environments** | **Agents** | **Inference** | **Memory** | **Safety** | **Telemetry** |
| :----: | :----: | :----: | :----: | :----: | :----: | :----: |

@@ -71,10 +71,12 @@ Llama Stack already has a number of "adapters" available for some popular Infere
## Dive In

- Look at the [Quick Start](getting_started/index) section to get started with Llama Stack.
- Learn more about Llama Stack Concepts to understand how different components fit together.
- Learn more about [Llama Stack Concepts](concepts/index) to understand how different components fit together.
- Check out the [Zero to Hero](zero_to_hero_guide) guide to learn in detail how to build your first agent.
- See how you can use [Llama Stack Distributions](distributions/index) to get started with popular inference and other service providers.

We also provide a number of client-side SDKs to make it easier to connect to a Llama Stack server in your preferred language.

| **Language** | **Client SDK** | **Package** |

@@ -86,16 +88,13 @@ We also provide a number of Client side SDKs to make it easier to connect to Lla

You can find more example scripts with client SDKs to talk with the Llama Stack server in our [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) repo.
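
For a quick smoke test from the shell, something like the following may work; the subcommand spelling is an assumption and varies between client versions:

```bash
# Hypothetical one-off completion via the client CLI -- subcommand
# spelling is an assumption for illustration.
llama-stack-client inference chat-completion --message "hello, what model are you?"
```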

```{toctree}
:hidden:
:maxdepth: 3

getting_started/index
concepts/index
distributions/index
llama_cli_reference/index
llama_cli_reference/download_models
llama_stack_client_cli_reference/index
api_providers/index
contributing/index
distribution_dev/index
```

8 docs/source/references/index.md Normal file
@@ -0,0 +1,8 @@
```{toctree}
:maxdepth: 2

```

# llama_cli_reference/index
# llama_cli_reference/download_models
# llama_stack_client_cli_reference/index