# Llama Stack Documentation

Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our GitHub page.
## Render locally

From the llama-stack `docs/` directory, run the following commands to render the docs locally:

```bash
npm install
npm run gen-api-docs all
npm run build
npm run serve
```
You can open up the docs in your browser at http://localhost:3000
## File Import System

This documentation uses `remark-code-import` to import files directly from the repository, eliminating copy-paste maintenance. Files are automatically embedded at build time.

### Importing Code Files

To import Python code (or any code files) with syntax highlighting, use this syntax in `.mdx` files:
````markdown
```python file=./demo_script.py title="demo_script.py"
```
````
This automatically imports the file content and displays it as a formatted code block with Python syntax highlighting.
**Note:** Paths are relative to the current `.mdx` file location, not the repository root.
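If your installed version of `remark-code-import` supports line-range fragments (a feature of recent releases; check the version pinned in `package.json`), you can embed just a slice of a file. The path and range below are illustrative:

````markdown
```python file=./demo_script.py#L1-L10 title="demo_script.py (excerpt)"
```
````

This keeps long reference files out of the page while the excerpt still tracks the source file.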
### Importing Markdown Files as Content
For importing and rendering markdown files (like `CONTRIBUTING.md`), use the `raw-loader` approach:
```jsx
import Contributing from '!!raw-loader!../../../CONTRIBUTING.md';
import ReactMarkdown from 'react-markdown';

<ReactMarkdown>{Contributing}</ReactMarkdown>
```
**Requirements:**

- Install dependencies:

  ```bash
  npm install --save-dev raw-loader react-markdown
  ```
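Putting this together, a minimal `.mdx` page using the `raw-loader` approach might look like the sketch below (the frontmatter, heading, and page location are illustrative):

```mdx
---
title: Contributing
---

import Contributing from '!!raw-loader!../../../CONTRIBUTING.md';
import ReactMarkdown from 'react-markdown';

{/* Render the imported markdown source as page content */}
<ReactMarkdown>{Contributing}</ReactMarkdown>
```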
**Path Resolution:**

- For `remark-code-import`: paths are relative to the current `.mdx` file location
- For `raw-loader`: paths are relative to the current `.mdx` file location
- Use `../` to navigate up directories as needed
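As a concrete illustration of the `../` rule (the page location here is hypothetical): each `../` climbs one directory level, which is why the `CONTRIBUTING.md` example above uses three segments to reach the repository root:

```jsx
// Hypothetical page location: docs/docs/contributing/index.mdx
// Each ../ climbs one level: contributing/ -> docs/ -> docs/ -> repo root
import Contributing from '!!raw-loader!../../../CONTRIBUTING.md';
```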
## Content
Try out Llama Stack's capabilities through our detailed Jupyter notebooks:
- Building AI Applications Notebook - A comprehensive guide to building production-ready AI applications using Llama Stack
- Benchmark Evaluations Notebook - Detailed performance evaluations and benchmarking results
- Zero-to-Hero Guide - Step-by-step guide for getting started with Llama Stack