# Llama Stack Documentation
Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our [ReadTheDocs page](https://llama-stack.readthedocs.io/en/latest/).
## Render locally

From the `llama-stack` root directory, run the following command to render the docs locally:

```bash
uv run --group docs sphinx-autobuild docs/source docs/build/html --write-all
```
You can then open the docs in your browser at http://localhost:8000.
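
If you want a one-off static build instead of the live-reload server, a plain `sphinx-build` invocation should work as well (a sketch, assuming the `docs` dependency group provides Sphinx):

```bash
# Build the HTML docs once into docs/build/html (no auto-reload).
uv run --group docs sphinx-build -b html docs/source docs/build/html
```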
## Content
Try out Llama Stack's capabilities through our detailed Jupyter notebooks:
- Building AI Applications Notebook - A comprehensive guide to building production-ready AI applications using Llama Stack
- Benchmark Evaluations Notebook - Detailed performance evaluations and benchmarking results
- Zero-to-Hero Guide - Step-by-step guide for getting started with Llama Stack
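
To try one of these locally, a minimal Jupyter invocation along these lines should work (a sketch, assuming you have `uv` installed and run it from the `docs/` directory, where `getting_started.ipynb` lives):

```bash
# Launch the getting-started notebook; uv provisions Jupyter in an
# ephemeral environment via --with, so no separate install is needed.
uv run --with jupyter jupyter lab getting_started.ipynb
```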