---
title: APIs
description: Available REST APIs and planned capabilities in Llama Stack
sidebar_label: APIs
sidebar_position: 1
---

# APIs

A Llama Stack API is a collection of REST endpoints. We currently support the following APIs (a short usage sketch follows the list):

- **Inference**: run inference with an LLM
- **Safety**: apply safety policies at the system (not just model) level
- **Agents**: run multi-step agentic workflows with LLMs, including tool usage, memory (RAG), and more
- **DatasetIO**: interface with datasets and data loaders
- **Scoring**: evaluate outputs of the system
- **Eval**: generate outputs (via Inference or Agents) and perform scoring
- **VectorIO**: perform operations on vector stores, such as adding, searching, and deleting documents
- **Telemetry**: collect telemetry data from the system
- **Post Training**: fine-tune a model
- **Tool Runtime**: interact with various tools and protocols
- **Responses**: generate responses from an LLM through an OpenAI-compatible API

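To make these endpoints concrete, the minimal sketch below calls the Inference and Responses APIs through the official `openai` Python client pointed at a locally running Llama Stack server. The base URL (port 8321 with the `/v1/openai/v1` route prefix) and the model ID are illustrative assumptions; substitute the address and a model registered with your distribution.

```python
# Minimal sketch, not a full reference: assumes a Llama Stack server is
# running locally and serves OpenAI-compatible routes under
# http://localhost:8321/v1/openai/v1 (port and prefix may vary by version).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8321/v1/openai/v1",  # assumed server address
    api_key="none",  # placeholder; a local server may not require a key
)

# Inference: an OpenAI-compatible chat completion
chat = client.chat.completions.create(
    model="meta-llama/Llama-3.2-3B-Instruct",  # placeholder model ID
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(chat.choices[0].message.content)

# Responses: the OpenAI-compatible Responses API listed above
resp = client.responses.create(
    model="meta-llama/Llama-3.2-3B-Instruct",  # placeholder model ID
    input="Explain what a vector store is in one sentence.",
)
print(resp.output_text)
```

Because these routes are OpenAI-compatible, any OpenAI SDK or a plain HTTP client works the same way; the native `llama-stack-client` SDK additionally covers the non-OpenAI APIs such as Safety, Scoring, and VectorIO.
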
We are working on adding a few more APIs to complete the application lifecycle. These will include:

- **Batch Inference**: run inference on a dataset of inputs
- **Batch Agents**: run agents on a dataset of inputs
- **Synthetic Data Generation**: generate synthetic data for model development
- **Batches**: OpenAI-compatible batch management for inference