forked from phoenix-oss/llama-stack-mirror
docs: Fix url to the llama-stack-spec yaml/html files (#1081)
# What does this PR do?

Fixes URLs in the RFC doc (RFC-0001-llama-stack.md). Also fixes minor markdown linting issues.

Signed-off-by: Anil Vishnoi <vishnoianil@gmail.com>
parent efdd60014d
commit aebd130b08
1 changed file with 13 additions and 17 deletions
# The Llama Stack API
**Authors:**
* Meta: @raghotham, @ashwinb, @hjshah, @jspisak
## Summary
As part of the Llama 3.1 release, Meta is releasing an RFC for ‘Llama Stack’, a comprehensive set of interfaces/APIs for ML developers building on top of Llama foundation models. We are looking for feedback on where the API can be improved, any corner cases we may have missed, and your general thoughts on how useful this will be. Ultimately, our hope is to create a standard for working with Llama models in order to simplify the developer experience and foster innovation across the Llama ecosystem.
## Motivation
Llama models were always intended to work as part of an overall system that can orchestrate several components, including calling external tools. Our vision is to go beyond the foundation models and give developers access to a broader system that gives them the flexibility to design and create custom offerings that align with their vision. This thinking started last year when we first introduced a system-level safety model. Meta has continued to release new components for orchestration at the system level, and most recently in Llama 3.1 we’ve introduced Llama Guard 3, a multilingual safety model; Prompt Guard, a prompt-injection filter; and a refreshed v3 of our CyberSec Evals. We are also releasing a reference implementation of an agentic system to demonstrate how all the pieces fit together.
While building the reference implementation, we realized that having a clean and consistent way to interface between components could be valuable not only for us but for anyone leveraging Llama models and other components as part of their system. We’ve also heard from the community that it faces a similar challenge: existing components have overlapping functionality and incompatible interfaces, yet they don't cover the end-to-end model life cycle.
With these motivations, we engaged folks in industry, startups, and the broader developer community.
We welcome feedback and ways to improve the proposal. We’re excited to grow the ecosystem around Llama and lower barriers for both developers and platform providers.
## Design decisions
Meta releases weights of both the pretrained and instruction fine-tuned Llama models to support several use cases. These weights can be improved (fine-tuned and aligned) with curated datasets and then deployed for inference to support specific applications. The curated datasets can be produced manually by humans, synthetically by other models, or by leveraging human feedback collected from usage of the application itself. This results in a continuous improvement cycle where the model gets better over time. This is the model life cycle.
### Model Lifecycle

For each of the operations that need to be performed during the model life cycle (e.g., fine-tuning, inference, evals), we identified the needed capabilities as toolchain APIs. Some of these capabilities are primitive operations like inference, while other capabilities, like synthetic data generation, are composed of other capabilities. The list of APIs we have identified to support the lifecycle of Llama models is below:
* /datasets - to support creating training and evaluation data sets
* /post_training - to support creating and managing supervised finetuning (SFT) or preference optimization jobs
* /evaluations - to support creating and managing evaluations for capabilities like question answering, summarization, or text generation
* /synthetic_data_generation - to support generating synthetic data using a data generation model and a reward model
* /reward_scoring - to support synthetic data generation
* /inference - to support serving the models for applications
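As a rough illustration of the kind of request such endpoints might accept, here is a minimal sketch of composing an /inference request body. The field names (`model`, `messages`) are assumptions borrowed from common chat-API conventions; the authoritative schema is defined in the llama-stack-spec files.

```python
import json

def build_chat_request(model: str, messages: list) -> bytes:
    """Serialize a hypothetical /inference request body.

    The field names ("model", "messages") are illustrative assumptions;
    the real schema lives in llama-stack-spec.yaml.
    """
    return json.dumps({"model": model, "messages": messages}).encode("utf-8")

payload = build_chat_request(
    "llama-3.1-8b-instruct",
    [{"role": "user", "content": "Hello"}],
)
```

The payload would then be POSTed to whatever host serves the stack; the model identifier above is likewise only a placeholder.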
### Agentic System
In addition to the model lifecycle, we considered the different components involved in an agentic system.
Note that as of today, in the OSS world, such a “loop” is often coded explicitly via elaborate prompt engineering using a ReAct pattern (typically) or a preconstructed execution graph. Llama 3.1 (and future Llamas) attempts to absorb this multi-step reasoning loop inside the main model itself.
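The explicit orchestration loop described above can be sketched roughly as follows. Everything here is an illustrative assumption (the message format, the stub model, and the tool names are not from the spec); the point is only to show the alternation between model output and tool execution that Llama 3.1 aims to absorb into the model.

```python
# Sketch of the explicit loop a ReAct-style agent encodes (illustrative only).
def react_loop(model_step, tools, user_message, max_steps=5):
    """Alternate between model decisions and tool execution until a final answer."""
    history = [("user", user_message)]
    for _ in range(max_steps):
        action = model_step(history)          # model decides: tool call or answer
        if action["type"] == "final_answer":
            return action["content"]
        result = tools[action["tool"]](action["input"])  # executor runs the tool
        history.append(("tool", result))
    return "no answer within step budget"

# Stub model: first asks for a search, then answers with the tool's result.
def stub_model(history):
    if history[-1][0] == "user":
        return {"type": "tool_call", "tool": "search",
                "input": "NBA finals last year"}
    return {"type": "final_answer", "content": history[-1][1]}

tools = {"search": lambda q: "results for: " + q}
answer = react_loop(stub_model, tools, "Who played the NBA finals last year?")
```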
**Let's consider an example:**
1. The user asks the system "Who played the NBA finals last year?"
1. The model "understands" that this question needs to be answered using web search. It answers this abstractly with a message of the form "Please call the search tool for me with the query: 'List finalist teams for NBA in the last year' ". Note that the model by itself does not call the tool (of course!)
1. The executor consults the set of tool implementations which have been configured by the developer to find an implementation for the "search tool". If it does not find it, it returns an error to the model. Otherwise, it executes this tool and returns the result of this tool back to the model.
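The dispatch step in the example above might look roughly like this; the registry shape and the error message are assumptions made for illustration, not part of the proposal.

```python
# Illustrative sketch of the executor's tool-dispatch step.
from typing import Callable

def dispatch_tool(registry: dict, tool_name: str, query: str) -> str:
    """Look up the tool the model asked for; execute it or report an error."""
    tool = registry.get(tool_name)
    if tool is None:
        # The error goes back to the model, which can recover or apologize.
        return "error: no implementation configured for tool '%s'" % tool_name
    return tool(query)

# Tool implementations configured by the developer (a stub search tool here):
tool_registry: dict = {"search": lambda q: "results for: " + q}

found = dispatch_tool(tool_registry, "search",
                      "List finalist teams for NBA in the last year")
missing = dispatch_tool(tool_registry, "calendar", "next game")
```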
We define the Llama Stack as a layer cake, shown below.

The API is defined in the [YAML](../docs/_static/llama-stack-spec.yaml) and [HTML](../docs/_static/llama-stack-spec.html) files. These files were generated from the Pydantic definitions in the api/datatypes.py and api/endpoints.py files in the llama-models, llama-stack, and llama-agentic-system repositories.
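To illustrate the general idea of deriving a machine-readable schema from typed Python definitions (the actual generation uses Pydantic; this stdlib-only sketch with dataclasses merely mimics the mechanism, and the datatype below is hypothetical):

```python
# Stdlib-only sketch of schema generation from typed definitions,
# mimicking how the spec files are produced from Pydantic models.
from dataclasses import dataclass, fields

@dataclass
class CompletionRequest:   # hypothetical datatype, for illustration only
    model: str
    prompt: str
    max_tokens: int

def to_schema(cls) -> dict:
    """Map a dataclass's annotations to a JSON-Schema-like dict."""
    type_names = {str: "string", int: "integer", float: "number", bool: "boolean"}
    return {
        "title": cls.__name__,
        "type": "object",
        "properties": {f.name: {"type": type_names[f.type]} for f in fields(cls)},
    }

schema = to_schema(CompletionRequest)
```

Pydantic performs the real work (validation, nested models, OpenAPI output); this sketch only shows why typed definitions make spec generation mechanical.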
## Sample implementations
To prove out the API, we implemented a handful of use cases to make things more concrete.
There is also a sample inference endpoint implementation in the [llama-stack](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/distribution/server/server.py) repository.
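A toy stand-in for such an endpoint, using only the Python standard library, might look like this. The route name and response shape are assumptions, and the “model” simply echoes the prompt; the real implementation linked above is far more complete.

```python
# Toy inference endpoint (illustrative; the real server lives in
# llama_stack/distribution/server/server.py).
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/inference/chat_completion":   # assumed route name
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        # A real server would run the model here; we echo the prompt instead.
        body = json.dumps(
            {"completion": "echo: " + request.get("prompt", "")}
        ).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence per-request logging
        pass

def serve() -> HTTPServer:
    """Start the toy endpoint on a free port in a background thread."""
    server = HTTPServer(("127.0.0.1", 0), InferenceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```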
## Limitations
The reference implementation for Llama Stack APIs to date only includes sample implementations using the inference API. We are planning to flesh out the design of Llama Stack Distributions (distros) by combining capabilities from different providers into a single vertically integrated stack. We plan to implement other APIs and, of course, we’d love contributions!
Thank you in advance for your feedback, support and contributions to make this a better API.