# Llama Stack
Llama Stack defines and standardizes the set of core building blocks needed to bring generative AI applications to market. These building blocks are presented as interoperable APIs, implemented by a broad set of Service Providers.
Our goal is to provide pre-packaged implementations that can be operated in a variety of deployment environments: developers can start iterating on their desktops or mobile devices and seamlessly transition to on-prem or public cloud deployments. At every point in this transition, the same set of APIs and the same developer experience are available.
The Stack APIs are rapidly improving but still a work in progress. We invite feedback as well as direct contributions.
## Quick Links
- New to Llama Stack? Start with the Introduction to understand our motivation and vision.
- Ready to build? Check out the Quick Start to get started.
- Need specific providers? Browse Distributions to see all the options available.
- Want to contribute? See the Contributing guide.
## Available SDKs
We have a number of client-side SDKs available for different languages.
| Language | Client SDK |
|---|---|
| Python | llama-stack-client-python |
| Swift | llama-stack-client-swift |
| Node | llama-stack-client-node |
| Kotlin | llama-stack-client-kotlin |
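As a rough sketch, a chat request through the Python SDK looks like the example below. The model id, server URL, and port are placeholders; the commented-out portion assumes `llama-stack-client` is installed and a Llama Stack server is running locally, so only the OpenAI-style message payload is built here.

```python
# Build the message payload used by Llama Stack's chat completion API.
# (OpenAI-style role/content messages; the model id is a placeholder --
# use any model served by your distribution.)
model_id = "meta-llama/Llama-3.2-3B-Instruct"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Llama Stack?"},
]

# With a stack server running locally, the call itself would look like:
#
#   from llama_stack_client import LlamaStackClient
#   client = LlamaStackClient(base_url="http://localhost:8321")
#   response = client.inference.chat_completion(
#       model_id=model_id, messages=messages
#   )
#   print(response.completion_message.content)
```

The same message shape is used by all client SDKs, so switching languages does not change how requests are structured.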
## Supported Llama Stack Implementations
A number of "adapters" are available for popular Inference and Memory (Vector Store) providers. For other APIs (particularly Safety and Agents), we provide reference implementations to get you started. We expect this list to grow as we gain confidence in the APIs and onboard more providers to the ecosystem.
| API Provider | Environments | Agents | Inference | Memory | Safety | Telemetry |
|---|---|---|---|---|---|---|
| Meta Reference | Single Node | Y | Y | Y | Y | Y |
| Cerebras | Single Node | | Y | | | |
| Fireworks | Hosted | Y | Y | Y | | |
| AWS Bedrock | Hosted | | Y | | Y | |
| Together | Hosted | Y | Y | | Y | |
| Ollama | Single Node | | Y | | | |
| TGI | Hosted and Single Node | | Y | | | |
| NVIDIA NIM | Hosted and Single Node | | Y | | | |
| Chroma | Single Node | | | Y | | |
| Postgres | Single Node | | | Y | | |
| PyTorch ExecuTorch | On-device iOS | Y | Y | | | |
| PyTorch ExecuTorch | On-device Android | | Y | | | |
```{toctree}
:hidden:
:maxdepth: 3

introduction/index
getting_started/index
concepts/index
distributions/index
building_applications/index
benchmark_evaluations/index
playground/index
contributing/index
references/index
```