# Llama Stack Documentation
Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our GitHub page.
## Render locally
From the llama-stack `docs/` directory, run the following commands to render the docs locally:

```bash
npm install
npm run gen-api-docs all
npm run build
npm run serve
```
You can then open the docs in your browser at http://localhost:3000.
## File Import System
This documentation uses remark-code-import to import files directly from the repository, eliminating copy-paste maintenance. Files are automatically embedded during build time.
### Importing Code Files
To import Python code (or any code files) with syntax highlighting, use this syntax in .mdx files:
```python file=./demo_script.py title="demo_script.py"
```

This automatically imports the file content and displays it as a formatted code block with Python syntax highlighting.
**Note:** Paths are relative to the current `.mdx` file location, not the repository root.
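
remark-code-import can also pull in just part of a file by appending a `#L` line-range fragment to the path. A minimal sketch, assuming the same `demo_script.py` as above and that the plugin version pinned in `package.json` supports range fragments:

````mdx
```python file=./demo_script.py#L1-L10 title="demo_script.py (excerpt)"
```
````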
### Importing Markdown Files as Content
For importing and rendering markdown files (like CONTRIBUTING.md), use the raw-loader approach:
```jsx
import Contributing from '!!raw-loader!../../../CONTRIBUTING.md';
import ReactMarkdown from 'react-markdown';

<ReactMarkdown>{Contributing}</ReactMarkdown>
```
**Requirements:**
- Install dependencies: `npm install --save-dev raw-loader react-markdown`
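
Put together, a docs page that re-renders the repository's CONTRIBUTING.md might look like the sketch below; the frontmatter values and relative path are illustrative assumptions, not an existing page in this repo:

```mdx
---
title: Contributing
---

import Contributing from '!!raw-loader!../../../CONTRIBUTING.md';
import ReactMarkdown from 'react-markdown';

<ReactMarkdown>{Contributing}</ReactMarkdown>
```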
**Path Resolution:**
- For `remark-code-import`: Paths are relative to the current `.mdx` file location
- For `raw-loader`: Paths are relative to the current `.mdx` file location
- Use `../` to navigate up directories as needed
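
For example, an `.mdx` page nested a few levels under `docs/` could reach back up toward the repository root like this; the target path is hypothetical and needs to match your actual layout:

````mdx
```python file=../../../scripts/demo_script.py title="demo_script.py"
```
````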
## Content
Try out Llama Stack's capabilities through our detailed Jupyter notebooks:
- Building AI Applications Notebook - A comprehensive guide to building production-ready AI applications using Llama Stack
- Benchmark Evaluations Notebook - Detailed performance evaluations and benchmarking results
- Zero-to-Hero Guide - Step-by-step guide for getting started with Llama Stack