Llama Stack Documentation

Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our ReadTheDocs page.

Render locally

From the llama-stack root directory, run the following command to render the docs locally with live reload:

uv run --group docs sphinx-autobuild docs/source docs/build/html --write-all

You can then open the docs in your browser at http://localhost:8000.
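If port 8000 is already in use on your machine, sphinx-autobuild accepts `--port` (and `--host`) options to serve the docs elsewhere. A minimal sketch, assuming the same invocation as above; the port value 8080 is an arbitrary example:

```shell
# Serve the rendered docs on an alternate port (8080 chosen arbitrarily)
uv run --group docs sphinx-autobuild docs/source docs/build/html --write-all --port 8080
```

The server watches docs/source for changes and rebuilds automatically, so edits appear in the browser without restarting the command.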

Content

Try out Llama Stack's capabilities through our detailed Jupyter notebooks: