# AI Application Examples
Llama Stack provides all the building blocks needed to create sophisticated AI applications.
The best way to get started is to look at this notebook, which walks through the various APIs (from basic inference to RAG agents) and shows how to use them.
Notebook: Building AI Applications
Here are some key topics that will help you build effective agents:
- RAG (Retrieval-Augmented Generation): Learn how to enhance your agents with external knowledge through retrieval mechanisms.
- Agent: Understand the components and design patterns of the Llama Stack agent framework.
- Agent Execution Loop: Understand how agents process information, make decisions, and execute actions in a continuous loop.
- Agents vs Responses API: Learn the differences between the Agents API and Responses API, and when to use each one.
- Tools: Extend your agents' capabilities by integrating with external tools and APIs.
- Evals: Evaluate your agents' effectiveness and identify areas for improvement.
- Telemetry: Monitor and analyze your agents' performance and behavior.
- Safety: Implement guardrails and safety measures to ensure responsible AI behavior.
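To make the execution-loop idea above concrete, here is a minimal, self-contained sketch of the decide/act cycle an agent runs. This is illustrative only, not the Llama Stack implementation: the names `decide`, `TOOLS`, and `run_agent`, and the toy `lookup` tool, are all hypothetical stand-ins for a model call and real tool integrations.

```python
# Conceptual sketch of an agent execution loop: the agent alternates
# between deciding on an action and executing a tool until it can
# produce a final answer. All names here are hypothetical.

def decide(state):
    """Stand-in for a model call: pick a tool to run, or finish."""
    if "result" not in state:
        return ("call_tool", "lookup", state["question"])
    return ("finish", f"Answer based on: {state['result']}")

TOOLS = {
    # Hypothetical tool: a tiny hard-coded knowledge lookup.
    "lookup": lambda q: {
        "What is Llama Stack?": "building blocks for AI apps",
    }.get(q, "unknown"),
}

def run_agent(question, max_steps=5):
    state = {"question": question}
    for _ in range(max_steps):          # the continuous loop
        action, *args = decide(state)   # process information, decide
        if action == "finish":
            return args[0]
        tool_name, tool_input = args
        state["result"] = TOOLS[tool_name](tool_input)  # execute action
    return "step limit reached"
```

In a real agent, `decide` is an LLM inference call and the loop is bounded by turn limits and safety checks rather than a simple step counter.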
```{toctree}
:hidden:
:maxdepth: 1

rag
agent
agent_execution_loop
responses_vs_agents
tools
evals
telemetry
safety
playground/index
```