# Building AI Applications

Llama Stack provides all the building blocks needed to create sophisticated AI applications.

The best way to get started is to look at this notebook, which walks through the various APIs (from basic inference to RAG agents) and how to use them.

**Notebook**: [Building AI Applications](https://github.com/meta-llama/llama-stack/blob/main/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb)
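
For a first taste of what those building blocks look like in code, here is a minimal, illustrative sketch of a basic inference call using the Python `llama-stack-client` SDK. The server address, port, and model identifier are placeholders, and the exact client surface may vary between releases; the notebook above is the authoritative walkthrough.

```python
# Illustrative sketch only: a basic chat completion against a locally running
# Llama Stack server. The base_url and model_id below are placeholders; use
# the values from your own deployment.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed local server address

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about distributed systems."},
    ],
)
print(response.completion_message.content)
```
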
Here are some key topics that will help you build effective agents:

- **[Agent Execution Loop](agent_execution_loop)**
- **[RAG](rag)**
- **[Safety](safety)**
- **[Tools](tools)**
- **[Telemetry](telemetry)**
- **[Evals](evals)**
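
To show how these pieces fit together in practice, below is a rough, hypothetical sketch of a single agent turn with the Python client. The model identifier, session name, and configuration fields are placeholders, and the agent API may differ between client releases; see [Agent Execution Loop](agent_execution_loop) and the notebook above for the canonical flow.

```python
# Illustrative sketch only: create a simple agent and run one turn of the
# agent execution loop. Identifiers and config values are placeholders.
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.lib.agents.event_logger import EventLogger
from llama_stack_client.types.agent_create_params import AgentConfig

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed local server address

agent_config = AgentConfig(
    model="meta-llama/Llama-3.2-3B-Instruct",    # placeholder model identifier
    instructions="You are a helpful assistant.",  # the agent's system prompt
    enable_session_persistence=False,
)

agent = Agent(client, agent_config)
session_id = agent.create_session("demo-session")

# Send a user message and stream back the turn's events as they arrive.
response = agent.create_turn(
    messages=[{"role": "user", "content": "What is Llama Stack?"}],
    session_id=session_id,
)
for log in EventLogger().log(response):
    log.print()
```
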
```{toctree}
:hidden:
:maxdepth: 1

agent_execution_loop
rag
safety
tools
telemetry
evals
```