# Llama Stack Documentation

Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our GitHub page.
## Render locally

From the llama-stack `docs/` directory, run the following commands to render the docs locally:
```bash
npm install
npm run gen-api-docs all
npm run build
npm run serve
```
You can open up the docs in your browser at http://localhost:3000
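If port 3000 is already in use, extra flags can be forwarded through npm to the underlying Docusaurus `serve` command. A minimal sketch (the `--port` and `--host` flags are standard Docusaurus options, not anything specific to this repo):

```bash
# Serve the built site on a different port and listen on all interfaces;
# everything after `--` is passed through to `docusaurus serve`
npm run serve -- --port 3001 --host 0.0.0.0
```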
## Content
Try out Llama Stack's capabilities through our detailed Jupyter notebooks:
- Building AI Applications Notebook - A comprehensive guide to building production-ready AI applications using Llama Stack
- Benchmark Evaluations Notebook - Detailed performance evaluations and benchmarking results
- Zero-to-Hero Guide - Step-by-step guide for getting started with Llama Stack
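The notebooks can also be run locally. A minimal sketch, assuming you have Python and pip available and are working from the `docs/` directory (filenames follow this directory's layout, e.g. `getting_started.ipynb`; the other notebooks live under `notebooks/` and `zero_to_hero_guide/`):

```bash
# Install Jupyter (any recent version works)
pip install jupyter

# Open the getting-started notebook from the docs/ directory
jupyter notebook getting_started.ipynb
```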