llama-stack-mirror/docs
Kelly Brown 026caa5551
docs: part 1 - fix warnings in documentation generation (#2861)
**Description**
This PR removes some of the warnings emitted when uv builds the docs.
- Errors appear when generating docs because .md files are not included in any
toctree. ~~Adding content to the `providers-gen.py` file that adds `---
orphan: true ---` to each file.~~ Added a toctree generator to the
`providers-gen.py` file, which gets rid of those errors in the builds.
- Deletes the `_openai_compat` files, an extension of PR #2849
- Adds the `files` APIs section to the `providers` toctree on the index
page
- Manually adds the `--- orphan: true ---` front matter to the advanced APIs
pages. I'll try to find a way to modify the providers code gen so it adds this
automatically (see the sketch below), but this fixes the errors for now.
- Adds the `testing.md` to the `contributing` toctree
- Adds `starting_llama_stack_server.md` to `distributions` toctree

There are some other warnings I'm still looking at, but this PR gets rid
of most of the toctree errors.
There's also an issue with the actual distribution codegen that I can
investigate in another PR. Opened a bug for it here: #2873
2025-07-30 10:50:10 -07:00
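
For context on the fixes above, here is a rough, hypothetical sketch of the two approaches mentioned in the description (the orphan front matter and the generated toctree). The function names and structure are illustrative only and are not the actual `providers-gen.py` implementation:

```python
# Hypothetical sketch only -- not the actual providers-gen.py code.
# (1) mark_orphan prepends MyST front matter so Sphinx stops warning that a
#     generated page is missing from every toctree.
# (2) write_index emits an index.md whose toctree lists every generated page,
#     which removes the warnings without marking the pages as orphans.
from pathlib import Path

ORPHAN_HEADER = "---\norphan: true\n---\n\n"


def mark_orphan(md_file: Path) -> None:
    # Prepend the orphan front matter if the page has no front matter yet.
    text = md_file.read_text()
    if not text.startswith("---"):
        md_file.write_text(ORPHAN_HEADER + text)


def write_index(out_dir: Path, title: str) -> None:
    # Generate an index.md whose {toctree} directive references every
    # markdown page in out_dir.
    fence = "`" * 3  # a literal ``` fence, built this way to keep the example readable
    pages = sorted(p.stem for p in out_dir.glob("*.md") if p.stem != "index")
    lines = [f"# {title}", "", fence + "{toctree}", ":maxdepth: 1", ""]
    lines += pages + [fence, ""]
    (out_dir / "index.md").write_text("\n".join(lines))
```

Marking a page as an orphan silences the "document isn't included in any toctree" warning for that page, while generating a toctree makes the pages reachable from the docs navigation; per the description, the PR uses a toctree generator for the provider pages and falls back to the orphan marker for the advanced APIs pages.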
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| _static | feat: add base64 encoded PDF support for OpenAI Chat Completions (#2881) | 2025-07-29 06:23:41 -04:00 |
| notebooks | feat: Add Nvidia e2e beginner notebook and tool calling notebook (#1964) | 2025-06-16 11:29:01 -04:00 |
| openapi_generator | feat: Add webmethod for deleting openai responses (#2160) | 2025-06-30 11:28:02 +02:00 |
| resources | Several documentation fixes and fix link to API reference | 2025-02-04 14:00:43 -08:00 |
| source | docs: part 1 - fix warnings in documentation generation (#2861) | 2025-07-30 10:50:10 -07:00 |
| zero_to_hero_guide | feat: consolidate most distros into "starter" (#2516) | 2025-07-04 15:58:03 +02:00 |
| conftest.py | fix: sleep after notebook test | 2025-03-23 14:03:35 -07:00 |
| contbuild.sh | Fix broken links with docs | 2024-11-22 20:42:17 -08:00 |
| dog.jpg | Support for Llama3.2 models and Swift SDK (#98) | 2024-09-25 10:29:58 -07:00 |
| getting_started.ipynb | docs: Add quick_start.ipynb notebook equivalent of index.md Quickstart guide (#2128) | 2025-07-03 13:55:43 +02:00 |
| getting_started_llama4.ipynb | docs: update docs to use "starter" than "ollama" (#2629) | 2025-07-05 08:44:57 +05:30 |
| getting_started_llama_api.ipynb | docs: Add quick_start.ipynb notebook equivalent of index.md Quickstart guide (#2128) | 2025-07-03 13:55:43 +02:00 |
| license_header.txt | Initial commit | 2024-07-23 08:32:33 -07:00 |
| make.bat | feat(pre-commit): enhance pre-commit hooks with additional checks (#2014) | 2025-04-30 11:35:49 -07:00 |
| Makefile | first version of readthedocs (#278) | 2024-10-22 10:15:58 +05:30 |
| original_rfc.md | chore: remove "rfc" directory and move original rfc to "docs" (#2718) | 2025-07-10 14:06:10 -07:00 |
| quick_start.ipynb | fix: use OLLAMA_URL to activate Ollama provider in starter (#2963) | 2025-07-30 10:11:17 -07:00 |
| README.md | feat: add auto-generated CI documentation pre-commit hook (#2890) | 2025-07-25 17:57:01 +02:00 |

Llama Stack Documentation

Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our ReadTheDocs page.

Render locally

From the llama-stack root directory, run the following command to render the docs locally:

uv run --group docs sphinx-autobuild docs/source docs/build/html --write-all

You can then open the docs in your browser at http://localhost:8000.
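
If you only want a one-off build rather than a live-reloading server, a plain sphinx-build invocation along these lines should also work (this assumes the same docs dependency group; it is not spelled out in this README):

uv run --group docs sphinx-build docs/source docs/build/html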

Content

Try out Llama Stack's capabilities through our detailed Jupyter notebooks: