## Resources
Some of these APIs are associated with a set of Resources. Here is the mapping of APIs to resources:
- Inference, Eval and Post Training are associated with `Model` resources.
- Safety is associated with `Shield` resources.
- Tool Runtime is associated with `ToolGroup` resources.
- DatasetIO is associated with `Dataset` resources.
- VectorIO is associated with `VectorDB` resources.
- Scoring is associated with `ScoringFunction` resources.
- Eval is associated with `Model` and `Benchmark` resources.
Furthermore, we allow these resources to be federated across multiple providers. For example, you may have some Llama models served by Fireworks while others are served by AWS Bedrock. Regardless, they will all work seamlessly with the same uniform Inference API provided by Llama Stack.
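To make the federation idea concrete, here is a minimal sketch (not the real Llama Stack implementation; all names are hypothetical) of a registry that maps each registered model to the provider serving it, so that a single uniform Inference call can be routed to Fireworks, AWS Bedrock, or any other backend:

```python
# Hypothetical sketch of provider routing: each model is registered
# against the provider that serves it, and lookups resolve a model
# identifier to its provider. Names here are illustrative only.

class InferenceRouter:
    def __init__(self):
        # model_id -> provider name
        self._providers: dict[str, str] = {}

    def register_model(self, model_id: str, provider: str) -> None:
        """Record which provider serves this model."""
        self._providers[model_id] = provider

    def provider_for(self, model_id: str) -> str:
        """Resolve a model to its provider, failing fast if unregistered."""
        try:
            return self._providers[model_id]
        except KeyError:
            raise ValueError(f"model {model_id!r} is not registered") from None


router = InferenceRouter()
router.register_model("llama-3.1-8b", "fireworks")
router.register_model("llama-3.1-70b", "bedrock")
print(router.provider_for("llama-3.1-8b"))
```

The caller never sees which provider was chosen; it simply issues the same Inference request regardless of where the model is hosted.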
```{tip}
Given this architecture, it is necessary for the Stack to know which provider to use for a given resource. This means you need to explicitly _register_ resources (including models) before you can use them with the associated APIs.
```
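The register-before-use rule can be sketched as follows. This is an illustrative toy, not the actual Llama Stack API: the point is simply that an API call against an unregistered resource fails fast rather than guessing a provider.

```python
# Toy illustration of the register-before-use rule (hypothetical names):
# resources must be registered before the associated API will accept them.

_registered: set[str] = set()


def register(resource_id: str) -> None:
    """Explicitly register a resource so APIs can resolve it."""
    _registered.add(resource_id)


def run_inference(model_id: str) -> str:
    """Reject calls that reference a model nobody registered."""
    if model_id not in _registered:
        raise LookupError(f"{model_id!r} must be registered before use")
    return f"inference ok: {model_id}"


register("llama-3.1-8b")
print(run_inference("llama-3.1-8b"))
```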