added more docs

Raghotham Murthy 2024-07-11 03:11:45 -07:00
parent e657e71446
commit 8631d90f1e
3 changed files with 35 additions and 35 deletions

View file

@@ -559,21 +559,20 @@ if __name__ == "__main__":
 as reward models. There are specific fine tuning and quantization techniques that we have found
 result in the best performing Llama models. We would like to share ways in which an LLM Ops
 toolchain can be designed by leveraging our learnings in getting Llama models to power Metas products.
-
+<br>
 In addition, the Llama 3 models Meta will release in July should not just be seen as a model, but
 really as a system starting the transition towards an entity capable of performing "agentic" tasks
 which require the ability to act as the central planner and break a task down and perform multi-step
 reasoning and call tools for specific operations. In addition, there needs to be general model-level
 safety checks as well as task-specific safety checks that are performed at a system level.
-
+<br>
 We are defining the Llama Stack as a set of APIs and standards by synthesizing our learnings while
 working with Llama models. The APIs are divided into the llama-toolchain-api and the llama-agentic-system-api.
 These APIs provide a coherent way for model developers to fine tune and serve Llama models, and agentic app
 developers to leverage all the capabilities of the Llama models seamlessly. We would like to work with the
 ecosystem to enhance and simplify the API. In addition, we will be releasing a plug-in architecture to allow
 creating distributions of the llama stack with different implementations.
-
-
+<br>
 This is the specification of the llama stack that provides
 a set of endpoints and their corresponding interfaces that are tailored to
 best leverage Llama Models. The specification is still in draft and subject to change.""",
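
Note: the docstring above describes the stack as "a set of endpoints and their corresponding interfaces". As a rough illustration of what that means for an agentic-app developer, here is a minimal Python sketch of a client call against a Llama Stack server. The /inference/chat_completion path, the model name, and the payload shape are assumptions made for illustration and are not taken from this diff; only the placeholder server URL appears in the spec files below.

# Minimal sketch of a client call against a Llama Stack endpoint.
# The endpoint path and payload shape are illustrative assumptions; the
# generated OpenAPI spec is the source of truth for the real interfaces.
import requests  # third-party: pip install requests

LLAMA_STACK_URL = "http://any-hosted-llama-stack.com"  # placeholder server from the spec

payload = {
    "model": "llama-3-70b-instruct",  # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "Summarize the Llama Stack in one sentence."}
    ],
}

# Hypothetical inference route; consult the spec for the actual path and schema.
response = requests.post(
    f"{LLAMA_STACK_URL}/inference/chat_completion", json=payload, timeout=30
)
response.raise_for_status()
print(response.json())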

View file

@@ -21,7 +21,7 @@
 "info": {
 "title": "[DRAFT] Llama Stack Specification",
 "version": "0.0.1",
-"description": "Meta has built out a fairly sophisticated platform internally to post train, evaluate, and \n serve Llama models to support Metas products. Given the newer capabilities of the llama models, \n the model development and model serving capabilities of the platform need to be enhanced in \n specific ways in order to best leverage the models. For example, the inference platform needs \n to support code execution to take advantage of the built-in knowledge of tools of the model. \n The largest models are of high enough quality to be used to generate synthetic data or be used \n as reward models. There are specific fine tuning and quantization techniques that we have found \n result in the best performing Llama models. We would like to share ways in which an LLM Ops \n toolchain can be designed by leveraging our learnings in getting Llama models to power Metas products.\n\n In addition, the Llama 3 models Meta will release in July should not just be seen as a model, but \n really as a system starting the transition towards an entity capable of performing \"agentic\" tasks \n which require the ability to act as the central planner and break a task down and perform multi-step \n reasoning and call tools for specific operations. In addition, there needs to be general model-level \n safety checks as well as task-specific safety checks that are performed at a system level. \n\n We are defining the Llama Stack as a set of APIs and standards by synthesizing our learnings while \n working with Llama models. The APIs are divided into the llama-toolchain-api and the llama-agentic-system-api. \n These APIs provide a coherent way for model developers to fine tune and serve Llama models, and agentic app \n developers to leverage all the capabilities of the Llama models seamlessly. We would like to work with the \n ecosystem to enhance and simplify the API. In addition, we will be releasing a plug-in architecture to allow \n creating distributions of the llama stack with different implementations.\n\n\n This is the specification of the llama stack that provides \n a set of endpoints and their corresponding interfaces that are tailored to \n best leverage Llama Models. The specification is still in draft and subject to change."
+"description": "Meta has built out a fairly sophisticated platform internally to post train, evaluate, and \n serve Llama models to support Metas products. Given the newer capabilities of the llama models, \n the model development and model serving capabilities of the platform need to be enhanced in \n specific ways in order to best leverage the models. For example, the inference platform needs \n to support code execution to take advantage of the built-in knowledge of tools of the model. \n The largest models are of high enough quality to be used to generate synthetic data or be used \n as reward models. There are specific fine tuning and quantization techniques that we have found \n result in the best performing Llama models. We would like to share ways in which an LLM Ops \n toolchain can be designed by leveraging our learnings in getting Llama models to power Metas products.\n <br>\n In addition, the Llama 3 models Meta will release in July should not just be seen as a model, but \n really as a system starting the transition towards an entity capable of performing \"agentic\" tasks \n which require the ability to act as the central planner and break a task down and perform multi-step \n reasoning and call tools for specific operations. In addition, there needs to be general model-level \n safety checks as well as task-specific safety checks that are performed at a system level. \n <br>\n We are defining the Llama Stack as a set of APIs and standards by synthesizing our learnings while \n working with Llama models. The APIs are divided into the llama-toolchain-api and the llama-agentic-system-api. \n These APIs provide a coherent way for model developers to fine tune and serve Llama models, and agentic app \n developers to leverage all the capabilities of the Llama models seamlessly. We would like to work with the \n ecosystem to enhance and simplify the API. In addition, we will be releasing a plug-in architecture to allow \n creating distributions of the llama stack with different implementations.\n <br>\n This is the specification of the llama stack that provides \n a set of endpoints and their corresponding interfaces that are tailored to \n best leverage Llama Models. The specification is still in draft and subject to change."
 },
 "servers": [
 {
@@ -3332,25 +3332,25 @@
 ],
 "tags": [
 {
-"name": "MemoryBanks"
+"name": "SyntheticDataGeneration"
 },
 {
-"name": "Datasets"
+"name": "Inference"
+},
+{
+"name": "MemoryBanks"
 },
 {
 "name": "AgenticSystem"
 },
 {
-"name": "SyntheticDataGeneration"
+"name": "Datasets"
 },
 {
-"name": "PostTraining"
-},
-{
 "name": "RewardScoring"
 },
 {
-"name": "Inference"
+"name": "PostTraining"
 },
 {
 "name": "ShieldConfig",

View file

@@ -1514,24 +1514,25 @@ info:
 \ are specific fine tuning and quantization techniques that we have found \n \
 \ result in the best performing Llama models. We would like to share\
 \ ways in which an LLM Ops \n toolchain can be designed by leveraging\
-\ our learnings in getting Llama models to power Metas products.\n\n \
-\ In addition, the Llama 3 models Meta will release in July should not\
-\ just be seen as a model, but \n really as a system starting the\
-\ transition towards an entity capable of performing \"agentic\" tasks \n \
-\ which require the ability to act as the central planner and break\
-\ a task down and perform multi-step \n reasoning and call tools\
-\ for specific operations. In addition, there needs to be general model-level\
-\ \n safety checks as well as task-specific safety checks that\
-\ are performed at a system level. \n\n We are defining the Llama\
-\ Stack as a set of APIs and standards by synthesizing our learnings while \n\
-\ working with Llama models. The APIs are divided into the llama-toolchain-api\
-\ and the llama-agentic-system-api. \n These APIs provide a coherent\
-\ way for model developers to fine tune and serve Llama models, and agentic app\
-\ \n developers to leverage all the capabilities of the Llama models\
-\ seamlessly. We would like to work with the \n ecosystem to enhance\
-\ and simplify the API. In addition, we will be releasing a plug-in architecture\
-\ to allow \n creating distributions of the llama stack with different\
-\ implementations.\n\n\n This is the specification of the llama\
+\ our learnings in getting Llama models to power Metas products.\n \
+\ <br>\n In addition, the Llama 3 models Meta will release\
+\ in July should not just be seen as a model, but \n really as\
+\ a system starting the transition towards an entity capable of performing \"\
+agentic\" tasks \n which require the ability to act as the central\
+\ planner and break a task down and perform multi-step \n reasoning\
+\ and call tools for specific operations. In addition, there needs to be general\
+\ model-level \n safety checks as well as task-specific safety\
+\ checks that are performed at a system level. \n <br>\n \
+\ We are defining the Llama Stack as a set of APIs and standards by synthesizing\
+\ our learnings while \n working with Llama models. The APIs are\
+\ divided into the llama-toolchain-api and the llama-agentic-system-api. \n \
+\ These APIs provide a coherent way for model developers to fine\
+\ tune and serve Llama models, and agentic app \n developers to\
+\ leverage all the capabilities of the Llama models seamlessly. We would like\
+\ to work with the \n ecosystem to enhance and simplify the API.\
+\ In addition, we will be releasing a plug-in architecture to allow \n \
+\ creating distributions of the llama stack with different implementations.\n\
+\ <br>\n This is the specification of the llama\
 \ stack that provides \n a set of endpoints and their corresponding\
 \ interfaces that are tailored to \n best leverage Llama Models.\
 \ The specification is still in draft and subject to change."
@@ -2052,13 +2053,13 @@ security:
 servers:
 - url: http://any-hosted-llama-stack.com
 tags:
-- name: MemoryBanks
-- name: Datasets
-- name: AgenticSystem
 - name: SyntheticDataGeneration
-- name: PostTraining
-- name: RewardScoring
 - name: Inference
+- name: MemoryBanks
+- name: AgenticSystem
+- name: Datasets
+- name: RewardScoring
+- name: PostTraining
 - description: <SchemaDefinition schemaRef="#/components/schemas/ShieldConfig" />
   name: ShieldConfig
 - description: <SchemaDefinition schemaRef="#/components/schemas/AgenticSystemCreateRequest"
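
Note: this commit edits the same description and tag list in three places (the Python generator docstring, the JSON spec, and the YAML spec), so a small consistency check can catch the two generated files drifting apart. A sketch, assuming both generated files sit in the working directory under the names used below and PyYAML is installed.

# Cross-check that the generated JSON and YAML specs agree on the fields
# touched by this commit: info.description and the tag ordering.
import json

import yaml  # third-party: pip install pyyaml

with open("llama-stack-spec.json") as f:  # assumed file name
    json_spec = json.load(f)
with open("llama-stack-spec.yaml") as f:  # assumed file name
    yaml_spec = yaml.safe_load(f)

assert json_spec["info"]["description"] == yaml_spec["info"]["description"]
assert [t["name"] for t in json_spec["tags"]] == [t["name"] for t in yaml_spec["tags"]]
print("JSON and YAML specs agree on info.description and tag order")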