
Llama Stack Documentation

Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our ReadTheDocs page.

Render locally

From the llama-stack root directory, run the following command to render the docs locally:

uv run --group docs sphinx-autobuild docs/source docs/build/html --write-all

You can then open the docs in your browser at http://localhost:8000. The server rebuilds and reloads the pages automatically when you edit the source files.

Content

Try out Llama Stack's capabilities through our detailed Jupyter notebooks: