# Building AI Applications
Llama Stack provides all the building blocks needed to create sophisticated AI applications.
The best way to get started is to look at this notebook, which walks through the various APIs (from basic inference to RAG to agents) and shows how to use them.
Notebook: Building AI Applications
Here are some key topics that will help you build effective agents:
- Agent: Understand the components and design patterns of the Llama Stack agent framework.
- Agent Execution Loop: Understand how agents process information, make decisions, and execute actions in a continuous loop.
- RAG (Retrieval-Augmented Generation): Learn how to enhance your agents with external knowledge through retrieval mechanisms.
- Tools: Extend your agents' capabilities by integrating with external tools and APIs.
- Evals: Evaluate your agents' effectiveness and identify areas for improvement.
- Telemetry: Monitor and analyze your agents' performance and behavior.
- Safety: Implement guardrails and safety measures to ensure responsible AI behavior.
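To make the agent concepts above concrete, here is a minimal sketch of the session/turn/tool pattern in plain Python. This is illustrative only: the names (`run_turn`, `fake_model`, `lookup_weather`, the message dictionaries) are hypothetical and do not mirror the Llama Stack client API; they just show how a turn loops between the model and tools while a session accumulates state.

```python
# Illustrative sketch only -- these names are hypothetical and are NOT the
# Llama Stack client API. They demonstrate the session/turn/tool loop.

def lookup_weather(city: str) -> str:
    """A stand-in custom tool the agent can call."""
    return f"Sunny in {city}"

TOOLS = {"lookup_weather": lookup_weather}

def fake_model(messages):
    """Pretend model: requests a tool on the first step, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": ("lookup_weather", {"city": "Paris"})}
    return {"content": "It is sunny in Paris."}

def run_turn(session, user_input):
    """One turn: loop between model and tools until a final answer."""
    messages = session + [{"role": "user", "content": user_input}]
    while True:
        step = fake_model(messages)
        if "tool_call" in step:
            name, args = step["tool_call"]
            result = TOOLS[name](**args)  # execute the requested tool
            messages.append({"role": "tool", "content": result})
        else:
            messages.append({"role": "assistant", "content": step["content"]})
            return messages  # the updated session state

session = []  # a session accumulates messages across turns
session = run_turn(session, "What's the weather in Paris?")
print(session[-1]["content"])  # -> It is sunny in Paris.
```

In a real application the model call, tool schemas, and session persistence are handled for you by the agent framework; the Agent and Agent Execution Loop pages linked above cover the actual APIs.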
```{toctree}
:hidden:
:maxdepth: 1

agent
agent_execution_loop
rag
tools
telemetry
evals
advanced_agent_patterns
safety
```