llama-stack-mirror/docs

Latest commit 92fdf6d0c9 by Ben Browning, 2025-04-09 15:47:02 -04:00: Use our own pydantic models for OpenAI Server APIs

Importing the models from the OpenAI client library required a top-level dependency on the openai python package, and it was also incompatible with our API generation code due to some quirks in how the OpenAI pydantic models are defined.

So this change creates our own stubs of those pydantic models, giving us more direct control over our API surface for this OpenAI-compatible API, making it work with our code generation, and removing the openai python client as a hard requirement of Llama Stack's API.
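As a rough illustration only (the class and field names below are hypothetical, not the actual Llama Stack definitions), a project-owned pydantic stub mirroring the shape of an OpenAI chat completion response might look like this:

    # Hypothetical sketch of a project-owned stub for an OpenAI-compatible response model.
    # Field names follow the OpenAI wire format; the class names are illustrative only.
    from typing import List, Literal, Optional

    from pydantic import BaseModel


    class OpenAIChatCompletionMessage(BaseModel):
        role: Literal["system", "user", "assistant", "tool"]
        content: Optional[str] = None


    class OpenAIChatCompletionChoice(BaseModel):
        index: int
        finish_reason: Optional[str] = None
        message: OpenAIChatCompletionMessage


    class OpenAIChatCompletion(BaseModel):
        id: str
        object: Literal["chat.completion"] = "chat.completion"
        created: int
        model: str
        choices: List[OpenAIChatCompletionChoice]

Because such stubs are plain pydantic models owned by the project, the OpenAPI generation code can introspect them directly, and the openai package remains an optional dependency.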
Directory contents:

    _static
    notebooks
    openapi_generator
    resources
    source
    zero_to_hero_guide
    conftest.py
    contbuild.sh
    dog.jpg
    getting_started.ipynb
    getting_started_llama4.ipynb
    license_header.txt
    make.bat
    Makefile
    readme.md
    requirements.txt

Llama Stack Documentation

Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our ReadTheDocs page.

Render locally

cd docs
pip install -r requirements.txt
python -m sphinx_autobuild source _build

You can then open the docs in your browser at http://localhost:8000 (sphinx-autobuild serves on port 8000 by default).

Content

Try out Llama Stack's capabilities through our detailed Jupyter notebooks: