
# Llama Stack

Llama Stack defines and standardizes the core building blocks needed to bring generative AI applications to market. It provides a unified set of APIs with implementations from leading service providers, enabling seamless transitions between development and production environments.

We focus on making it easy to build production applications with the Llama model family - from the latest Llama 3.3 to specialized models like Llama Guard for safety.


Our goal is to provide pre-packaged implementations (aka "distributions") which can be run in a variety of deployment environments. Llama Stack can assist you throughout your entire app development lifecycle - start iterating locally, on mobile, or on desktop, and seamlessly transition to on-prem or public cloud deployments. At every point in this transition, the same set of APIs and the same developer experience are available.

- **New to Llama Stack?** Start with the Introduction to understand our motivation and vision.
- **Ready to build?** Check out the Quick Start to get started.
- **Need specific providers?** Browse Distributions to see all the options available.
- **Want to contribute?** See the Contributing guide.

## Available SDKs

We have a number of client-side SDKs available for different languages.

| Language | Client SDK                | Package             |
| -------- | ------------------------- | ------------------- |
| Python   | llama-stack-client-python | PyPI                |
| Swift    | llama-stack-client-swift  | Swift Package Index |
| Node     | llama-stack-client-node   | NPM                 |
| Kotlin   | llama-stack-client-kotlin | Maven               |
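Each SDK talks to a running distribution through the same unified APIs. As a minimal sketch with the Python SDK, assuming a distribution is already serving at `http://localhost:5000` and that the model identifier below is registered in your setup (both are illustrative assumptions, not defaults guaranteed by this page):

```python
# Sketch only: assumes `pip install llama-stack-client` and a Llama Stack
# distribution already running at the base_url below.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")

# Call the unified inference API; the model id is an example placeholder -
# substitute one served by your distribution.
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Hello, Llama Stack!"}],
)
print(response.completion_message.content)
```

Because every provider sits behind the same API surface, this snippet stays unchanged whether the distribution runs locally via Ollama or against a hosted provider.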

## Supported Llama Stack Implementations

A number of "adapters" are available for some popular Inference and Memory (Vector Store) providers. For other APIs (particularly Safety and Agents), we provide reference implementations you can use to get started. We expect this list to grow over time as we gain confidence in the APIs and onboard more providers to the ecosystem.

| API Provider       | Environments           | Agents | Inference | Memory | Safety | Telemetry |
| ------------------ | ---------------------- | :----: | :-------: | :----: | :----: | :-------: |
| Meta Reference     | Single Node            |   Y    |     Y     |   Y    |   Y    |     Y     |
| Cerebras           | Single Node            |        |     Y     |        |        |           |
| Fireworks          | Hosted                 |   Y    |     Y     |   Y    |        |           |
| AWS Bedrock        | Hosted                 |        |     Y     |        |   Y    |           |
| Together           | Hosted                 |   Y    |     Y     |        |   Y    |           |
| Ollama             | Single Node            |        |     Y     |        |        |           |
| TGI                | Hosted and Single Node |        |     Y     |        |        |           |
| NVIDIA NIM         | Hosted and Single Node |        |     Y     |        |        |           |
| Chroma             | Single Node            |        |           |   Y    |        |           |
| Postgres           | Single Node            |        |           |   Y    |        |           |
| PyTorch ExecuTorch | On-device iOS          |   Y    |     Y     |        |        |           |
| PyTorch ExecuTorch | On-device Android      |        |     Y     |        |        |           |
```{toctree}
:hidden:
:maxdepth: 3

self
introduction/index
getting_started/index
concepts/index
distributions/index
building_applications/index
benchmark_evaluations/index
playground/index
contributing/index
references/index
```