Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-27 22:51:59 +00:00
# Llama Stack Documentation
Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our ReadTheDocs page.
## Render locally

From the repository root (`requirements.txt` lives in the `docs` directory, so change into it first):

    cd docs
    pip install -r requirements.txt
    python -m sphinx_autobuild source _build

You can then open the docs in your browser at http://localhost:8000.
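`sphinx_autobuild` serves on port 8000 by default and accepts a `--port` flag if that port is already in use. As a small illustrative helper (not part of this repo), you can check whether a port is free before starting the server:

```python
import socket


def port_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is accepting connections on host:port,
    i.e. the docs server should be able to bind there."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on a successful connection (port busy),
        # and an error code otherwise (port free or unreachable).
        return s.connect_ex((host, port)) != 0


if __name__ == "__main__":
    print(port_free(8000))
```

If `port_free(8000)` is `False`, pick another port, e.g. `python -m sphinx_autobuild source _build --port 8001`.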
## Content
Try out Llama Stack's capabilities through our detailed Jupyter notebooks:
- Building AI Applications Notebook - A comprehensive guide to building production-ready AI applications using Llama Stack
- Benchmark Evaluations Notebook - Detailed performance evaluations and benchmarking results
- Zero-to-Hero Guide - Step-by-step guide for getting started with Llama Stack