
# Llama Stack Documentation

Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our GitHub page.

## Render locally

From the llama-stack `docs/` directory, run the following commands to render the docs locally:

```bash
npm install
npm run gen-api-docs all
npm run build
npm run serve
```

You can then open the docs in your browser at http://localhost:3000.
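
If you are iterating on the docs, a live-reload preview can be more convenient than a full build-and-serve cycle. The snippet below is a minimal sketch that assumes the project's `package.json` defines the standard Docusaurus `start` script (typical for a Docusaurus setup, but not confirmed here); if it does not, fall back to the build-and-serve commands above.

```bash
# Assumption: "start" maps to `docusaurus start` in package.json (the usual scaffold).
npm run gen-api-docs all   # regenerate the API reference pages first (same step as above)
npm run start              # dev server with hot reload, typically on http://localhost:3000
```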

## Content

Try out Llama Stack's capabilities through our detailed Jupyter notebooks: