
# Llama Stack Documentation

Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our GitHub page.

## Render locally

From the llama-stack `docs/` directory, run the following commands to render the docs locally:

```bash
npm install
npm run gen-api-docs all
npm run build
npm run serve
```

You can open up the docs in your browser at http://localhost:3000.
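If port 3000 is already in use, you can serve the built site on a different port. This is a minimal sketch assuming the `serve` script maps to `docusaurus serve` (the standard Docusaurus setup); the port number below is arbitrary:

```bash
# Forward --port through npm to the underlying docusaurus serve command;
# 3001 is just an example of a free port.
npm run serve -- --port 3001
```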

## Content

Try out Llama Stack's capabilities through our detailed Jupyter notebooks: