chore: use JSON instead of YAML for openapi generation
JSON has a few advantages over YAML in this context:

* No extra dependency: Removed ruamel.yaml; using the standard library
  json module.
* Simpler code: No YAML formatting configuration (indent, flow style,
  string presentation, etc.). JSON serialization is straightforward.
* Faster generation: JSON serialization is typically faster and more
  predictable than YAML formatting.
* Native OpenAPI format: JSON is the native OpenAPI format. Many tools
  prefer JSON, reducing potential compatibility issues.
* Better tooling support: JSON is widely supported. Tools like oasdiff,
  OpenAPI validators, and code generators work well with JSON.
* Fewer formatting edge cases: YAML has edge cases (multiline strings,
  special characters, quoting, scalar typing, etc.). JSON avoids these.

All the tools that consumed the YAML files have been updated, namely oasdiff
for the conformance tests, the Docusaurus config, and the generator.
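
As an illustration of the simplification, here is a minimal sketch of what the serialization step can look like with the standard library, assuming the generator hands over the assembled spec as a plain dict (the function and file names below are illustrative, not the actual generator code):

```python
import json
from pathlib import Path


def write_openapi_spec(spec: dict, output_dir: Path) -> Path:
    """Write the assembled OpenAPI document as JSON.

    No ruamel.yaml dependency and no indent/flow-style/string-presentation
    tuning is needed: the standard library's json module covers it.
    """
    output_dir.mkdir(parents=True, exist_ok=True)
    out_file = output_dir / "llama-stack-spec.json"  # illustrative output name
    out_file.write_text(json.dumps(spec, indent=2) + "\n")
    return out_file
```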

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-11-03 18:05:48 +01:00

| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| docs | feat: Add rerank API for NVIDIA Inference Provider (#3329) | 2025-10-30 21:42:09 -07:00 |
| notebooks | docs: A getting started notebook featuring simple agent examples. (#3955) | 2025-10-29 14:13:34 -04:00 |
| openapi_generator | chore: use JSON instead of YAML for openapi generation | 2025-11-03 18:05:48 +01:00 |
| scripts | feat: Add static file import system for docs (#3882) | 2025-10-24 14:01:33 -04:00 |
| src | feat: Add static file import system for docs (#3882) | 2025-10-24 14:01:33 -04:00 |
| static | chore: use JSON instead of YAML for openapi generation | 2025-11-03 18:05:48 +01:00 |
| supplementary | docs: adding supplementary markdown content to API specs (#3632) | 2025-10-01 10:15:30 -07:00 |
| zero_to_hero_guide | chore: update doc (#3857) | 2025-10-20 10:33:21 -07:00 |
| docusaurus.config.ts | chore: use JSON instead of YAML for openapi generation | 2025-11-03 18:05:48 +01:00 |
| dog.jpg | Support for Llama3.2 models and Swift SDK (#98) | 2024-09-25 10:29:58 -07:00 |
| getting_started.ipynb | chore: update getting_started (#3875) | 2025-10-21 11:09:45 -07:00 |
| getting_started_llama4.ipynb | chore: update doc (#3857) | 2025-10-20 10:33:21 -07:00 |
| getting_started_llama_api.ipynb | chore: update doc (#3857) | 2025-10-20 10:33:21 -07:00 |
| license_header.txt | Initial commit | 2024-07-23 08:32:33 -07:00 |
| original_rfc.md | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| package-lock.json | feat: Add static file import system for docs (#3882) | 2025-10-24 14:01:33 -04:00 |
| package.json | feat: Add static file import system for docs (#3882) | 2025-10-24 14:01:33 -04:00 |
| quick_start.ipynb | chore: update quick_start (#3878) | 2025-10-21 11:33:23 -07:00 |
| README.md | feat: Add static file import system for docs (#3882) | 2025-10-24 14:01:33 -04:00 |
| sidebars.ts | fix(docs): remove leftover telemetry sidebar section (#3961) | 2025-10-29 11:20:13 -04:00 |
| tsconfig.json | docs: docusaurus setup (#3541) | 2025-09-24 14:11:30 -07:00 |

# Llama Stack Documentation

Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our GitHub page.

## Render locally

From the llama-stack `docs/` directory, run the following commands to render the docs locally:

```bash
npm install
npm run gen-api-docs all
npm run build
npm run serve
```

You can then open the docs in your browser at http://localhost:3000

## File Import System

This documentation uses remark-code-import to import files directly from the repository, eliminating copy-paste maintenance. Files are automatically embedded during build time.

### Importing Code Files

To import Python code (or any code file) with syntax highlighting, use this syntax in `.mdx` files:

```python file=./demo_script.py title="demo_script.py"
```

This automatically imports the file content and displays it as a formatted code block with Python syntax highlighting.

**Note:** Paths are relative to the current `.mdx` file location, not the repository root.

### Importing Markdown Files as Content

For importing and rendering markdown files (like CONTRIBUTING.md), use the raw-loader approach:

```jsx
import Contributing from '!!raw-loader!../../../CONTRIBUTING.md';
import ReactMarkdown from 'react-markdown';

<ReactMarkdown>{Contributing}</ReactMarkdown>
```

**Requirements:**

- Install dependencies: `npm install --save-dev raw-loader react-markdown`

**Path Resolution:**

- For `remark-code-import`: paths are relative to the current `.mdx` file location
- For `raw-loader`: paths are relative to the current `.mdx` file location
- Use `../` to navigate up directories as needed (see the example below)
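
For example, a page could pull in a script that lives above its own directory with a path like the following (the `../../scripts/demo_script.py` path and title are hypothetical; point them at a file that actually exists relative to your `.mdx` page):

```python file=../../scripts/demo_script.py title="demo_script.py"
```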

## Content

Try out Llama Stack's capabilities through our detailed Jupyter notebooks: