# Llama Stack Documentation
Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our GitHub page.
## Render locally
From the llama-stack `docs/` directory, run the following commands to render the docs locally:

```bash
npm install
npm run gen-api-docs all
npm run build
npm run serve
```

You can open up the docs in your browser at http://localhost:3000.
## File Import System
This documentation uses `remark-code-import` to import files directly from the repository, eliminating copy-paste maintenance. Files are automatically embedded during build time.
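For context on how such a plugin gets picked up by Docusaurus, the sketch below shows a typical way to register it through the `remarkPlugins` option. This is an illustration only, not this repository's actual `docusaurus.config.ts`; the preset name, site metadata, and the plugin's export shape are assumptions to verify against `package.json`.

```ts
// docusaurus.config.ts — minimal sketch, not this repo's real config.
// Assumes the "classic" preset and that remark-code-import has a default export.
import type { Config } from '@docusaurus/types';
import codeImport from 'remark-code-import';

const config: Config = {
  title: 'Llama Stack',       // placeholder site metadata
  url: 'https://example.com', // placeholder
  baseUrl: '/',
  presets: [
    [
      'classic',
      {
        docs: {
          // remark-code-import runs at build time: it finds code fences that
          // carry a `file=...` attribute and inlines the referenced file.
          remarkPlugins: [codeImport],
        },
      },
    ],
  ],
};

export default config;
```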
### Importing Code Files
To import Python code (or any code files) with syntax highlighting, use this syntax in `.mdx` files:

````markdown
```python file=./demo_script.py title="demo_script.py"
```
````

This automatically imports the file content and displays it as a formatted code block with Python syntax highlighting.
**Note:** Paths are relative to the current `.mdx` file location, not the repository root.
### Importing Markdown Files as Content
For importing and rendering markdown files (like CONTRIBUTING.md), use the raw-loader approach:
```jsx
import Contributing from '!!raw-loader!../../../CONTRIBUTING.md';
import ReactMarkdown from 'react-markdown';

<ReactMarkdown>{Contributing}</ReactMarkdown>
```
**Requirements:**
- Install dependencies:

  ```bash
  npm install --save-dev raw-loader react-markdown
  ```
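The same raw-loader pattern also works outside `.mdx`, for example in a standalone React page. The sketch below is hypothetical (the file path, import path, and the `@ts-ignore` workaround are assumptions for illustration), not code from this repository:

```tsx
// src/pages/contributing.tsx — hypothetical illustration of the raw-loader +
// react-markdown pattern in a TypeScript page component; paths are placeholders.
import React from 'react';
import Layout from '@theme/Layout';
import ReactMarkdown from 'react-markdown';
// The `!!raw-loader!` prefix tells webpack to return the file's raw text
// instead of treating it as a module to parse.
// @ts-ignore — raw-loader imports usually need a module declaration in TS
import Contributing from '!!raw-loader!../../CONTRIBUTING.md';

export default function ContributingPage() {
  return (
    <Layout title="Contributing">
      <main className="container margin-vert--lg">
        <ReactMarkdown>{Contributing}</ReactMarkdown>
      </main>
    </Layout>
  );
}
```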
**Path Resolution:**
- For `remark-code-import`: Paths are relative to the current `.mdx` file location
- For `raw-loader`: Paths are relative to the current `.mdx` file location
- Use `../` to navigate up directories as needed
## Content
Try out Llama Stack's capabilities through our detailed Jupyter notebooks:
- Building AI Applications Notebook - A comprehensive guide to building production-ready AI applications using Llama Stack
- Benchmark Evaluations Notebook - Detailed performance evaluations and benchmarking results
- Zero-to-Hero Guide - Step-by-step guide for getting started with Llama Stack