
# Llama Stack Documentation

Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our GitHub page.

## Render locally

From the llama-stack `docs/` directory, run the following commands to render the docs locally:

```bash
npm install
npm run gen-api-docs all
npm run build
npm run serve
```

You can then open the docs in your browser at http://localhost:3000.
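
If port 3000 is already in use, you can forward extra flags through npm to the underlying Docusaurus CLI. This is a minimal sketch assuming the `serve` script wraps `docusaurus serve` (check `package.json` to confirm):

```bash
# Everything after npm's `--` separator is passed to `docusaurus serve`
npm run serve -- --port 3001 --host 0.0.0.0
```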

## Content

Try out Llama Stack's capabilities through our detailed Jupyter notebooks: