Bumps [chromadb](https://github.com/chroma-core/chroma) from 1.0.16 to 1.0.20.

Release notes (sourced from [chromadb's releases](https://github.com/chroma-core/chroma/releases)):

1.0.20

Version: 1.0.20 | Git ref: refs/tags/1.0.20 | Build Date: 2025-08-18T17:04 | PIP Package: chroma-1.0.20.tar.gz | GitHub Container Registry Image: :1.0.20 | DockerHub Image: :1.0.20

What's Changed
- [RELEASE] 1.0.20 by @itaismith in chroma-core/chroma#5303

Full Changelog: https://github.com/chroma-core/chroma/compare/1.0.19...1.0.20

1.0.18

Version: 1.0.18 | Git ref: refs/tags/1.0.18 | Build Date: 2025-08-18T08:09 | PIP Package: chroma-1.0.18.tar.gz | GitHub Container Registry Image: :1.0.18 | DockerHub Image: :1.0.18

What's Changed
- [CHORE]: Added short descriptions to CLI commands by @tazarov in chroma-core/chroma#5217
- [ENH] Use AVX in distance calculations by @jairad26 in chroma-core/chroma#5258
- [ENH] Auto-set tenant, scoped database in python CloudClient by @jairad26 in chroma-core/chroma#5026
- [PERF]: Modify get_range to return an iterator by @sanketkedia in chroma-core/chroma#5256
- [BUG] Mark dirty on rollback of cursor to guarantee compaction picks it up by @rescrv in chroma-core/chroma#5265
- [ENH]: add metric for component queue depth & change dispatcher queue depth metric buckets by @codetheweb in chroma-core/chroma#5261
- [ENH]: add garbage collection CLI for manual garbage collection by @codetheweb in chroma-core/chroma#5250
- [DOC] Clean up DEVELOP.md by @kylediaz in chroma-core/chroma#5270
- [ENH]: Further optimize query on getCollections when databases pkey is fully specified by @tanujnay112 in chroma-core/chroma#5268
- [ENH] Update Rust to allow build with AVX when flag is set by @jairad26 in chroma-core/chroma#5269
- [ENH]: Fix test_add flake by @sanketkedia in chroma-core/chroma#5272
- [BUG]: Revert "[ENH]: Further optimize query on getCollections when databases pkey is fully specified (#5268)" by @tanujnay112 in chroma-core/chroma#5273
- [BLD] Add maturin to dev dependencies by @kylediaz in chroma-core/chroma#5271
- [ENH]: Optimize GetCollections and remove usage of raw gorm by @tanujnay112 in chroma-core/chroma#5274
- [ENH]: add config param to garbage collector to control how many collections are fetched from SysDb by @codetheweb in chroma-core/chroma#5275
- [ENH] Reject version files without paths by @rescrv in chroma-core/chroma#5267
- [ENH] Enable getting a collection by CRN by @drewkim in chroma-core/chroma#5244
- [BUG] CompactionError did not proxy should_trace_error by @rescrv in chroma-core/chroma#5282
- [BUG] Resolve deadlock in system crate? by @rescrv in chroma-core/chroma#5283
- [ENH] Complete the NAC metrics for the write half by @rescrv in chroma-core/chroma#5278
- [BUG]: fix missing node in constructed version graph for garbage collection by @codetheweb in chroma-core/chroma#5284
- [BUG] Fix test flake from 5283 by @rescrv in chroma-core/chroma#5287
- [BUG]: Don't GC hnsw if it is empty by @sanketkedia in chroma-core/chroma#5295
- [ENH] Sync before flushing by @HammadB in chroma-core/chroma#5296
- [DOC] update quota limits by @philipithomas in chroma-core/chroma#5297
- [BUG] Fix CLI copy offset by @itaismith in chroma-core/chroma#5288
- [ENH] Add support for default space in create coll config by @jairad26 in chroma-core/chroma#5293

(Release notes and commit list truncated.)
Llama Stack
Quick Start | Documentation | Colab Notebook | Discord
✨🎉 Llama 4 Support 🎉✨
We released Version 0.2.0 with support for the Llama 4 herd of models released by Meta.
👋 Click here to see how to run Llama 4 models on Llama Stack
Note that you need an 8xH100 GPU host to run these models.
pip install -U llama_stack
MODEL="Llama-4-Scout-17B-16E-Instruct"
# get meta url from llama.com
llama model download --source meta --model-id $MODEL --meta-url <META_URL>
# start a llama stack server
INFERENCE_MODEL=meta-llama/$MODEL llama stack build --run --template meta-reference-gpu
# install client to interact with the server
pip install llama-stack-client
CLI
# Run a chat completion
MODEL="Llama-4-Scout-17B-16E-Instruct"
llama-stack-client --endpoint http://localhost:8321 \
inference chat-completion \
--model-id meta-llama/$MODEL \
--message "write a haiku for meta's llama 4 models"
ChatCompletionResponse(
completion_message=CompletionMessage(content="Whispers in code born\nLlama's gentle, wise heartbeat\nFuture's soft unfold", role='assistant', stop_reason='end_of_turn', tool_calls=[]),
logprobs=None,
metrics=[Metric(metric='prompt_tokens', value=21.0, unit=None), Metric(metric='completion_tokens', value=28.0, unit=None), Metric(metric='total_tokens', value=49.0, unit=None)]
)
Python SDK
from llama_stack_client import LlamaStackClient
client = LlamaStackClient(base_url="http://localhost:8321")
model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"
prompt = "Write a haiku about coding"
print(f"User> {prompt}")
response = client.inference.chat_completion(
model_id=model_id,
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt},
],
)
print(f"Assistant> {response.completion_message.content}")
As more providers start supporting Llama 4, you can use them in Llama Stack as well. We are adding to the list. Stay tuned!
🚀 One-Line Installer 🚀
To try Llama Stack locally, run:
curl -LsSf https://github.com/meta-llama/llama-stack/raw/main/scripts/install.sh | bash
Overview
Llama Stack standardizes the core building blocks that simplify AI application development. It codifies best practices across the Llama ecosystem. More specifically, it provides
- Unified API layer for Inference, RAG, Agents, Tools, Safety, Evals, and Telemetry.
- Plugin architecture to support the rich ecosystem of different API implementations in various environments, including local development, on-premises, cloud, and mobile.
- Prepackaged verified distributions which offer a one-stop solution for developers to get started quickly and reliably in any environment.
- Multiple developer interfaces like CLI and SDKs for Python, Typescript, iOS, and Android.
- Standalone applications as examples for how to build production-grade AI applications with Llama Stack.
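To make the unified API point concrete: because the API surface stays the same no matter which provider backs it, moving between environments is usually just a matter of pointing the client at a different server. The sketch below reuses only the chat_completion call shown earlier; the endpoint and model ID are illustrative placeholders, not fixed values.
from llama_stack_client import LlamaStackClient

def ask(base_url: str, model_id: str, prompt: str) -> str:
    # The same application code runs unchanged whether base_url points at a
    # local development stack or a hosted production distribution.
    client = LlamaStackClient(base_url=base_url)
    response = client.inference.chat_completion(
        model_id=model_id,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.completion_message.content

# Illustrative values -- substitute the server and model you actually run.
print(ask("http://localhost:8321", "meta-llama/Llama-4-Scout-17B-16E-Instruct", "Hello!"))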
Llama Stack Benefits
- Flexible Options: Developers can choose their preferred infrastructure without changing APIs and enjoy flexible deployment choices.
- Consistent Experience: With its unified APIs, Llama Stack makes it easier to build, test, and deploy AI applications with consistent application behavior.
- Robust Ecosystem: Llama Stack is already integrated with distribution partners (cloud providers, hardware vendors, and AI-focused companies) that offer tailored infrastructure, software, and services for deploying Llama models.
By reducing friction and complexity, Llama Stack empowers developers to focus on what they do best: building transformative generative AI applications.
API Providers
Here is a list of the various API providers and available distributions that can help developers get started easily with Llama Stack. Please check out the documentation for the full list.
API Provider Builder | Environments | Agents | Inference | VectorIO | Safety | Telemetry | Post Training | Eval | DatasetIO |
---|---|---|---|---|---|---|---|---|---|
Meta Reference | Single Node | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
SambaNova | Hosted | ✅ | ✅ | ||||||
Cerebras | Hosted | ✅ | |||||||
Fireworks | Hosted | ✅ | ✅ | ✅ | |||||
AWS Bedrock | Hosted | ✅ | ✅ | ||||||
Together | Hosted | ✅ | ✅ | ✅ | |||||
Groq | Hosted | ✅ | |||||||
Ollama | Single Node | ✅ | |||||||
TGI | Hosted/Single Node | ✅ | |||||||
NVIDIA NIM | Hosted/Single Node | ✅ | ✅ | ||||||
ChromaDB | Hosted/Single Node | ✅ | |||||||
Milvus | Hosted/Single Node | ✅ | |||||||
Qdrant | Hosted/Single Node | ✅ | |||||||
Weaviate | Hosted/Single Node | ✅ | |||||||
SQLite-vec | Single Node | ✅ | |||||||
PG Vector | Single Node | ✅ | |||||||
PyTorch ExecuTorch | On-device iOS | ✅ | ✅ | ||||||
vLLM | Single Node | ✅ | |||||||
OpenAI | Hosted | ✅ | |||||||
Anthropic | Hosted | ✅ | |||||||
Gemini | Hosted | ✅ | |||||||
WatsonX | Hosted | ✅ | |||||||
HuggingFace | Single Node | ✅ | ✅ | ||||||
TorchTune | Single Node | ✅ | |||||||
NVIDIA NEMO | Hosted | ✅ | ✅ | ✅ | ✅ | ✅ | |||
NVIDIA | Hosted | ✅ | ✅ | ✅ |
Note: Additional providers are available through external packages. See the External Providers documentation.
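As an example of the VectorIO column above, a provider such as ChromaDB can back the vector store used for retrieval. The sketch below is an assumption-laden illustration rather than the definitive API: it assumes the vector_dbs.register helper of the llama-stack-client Python SDK, a provider_id of "chromadb", and the all-MiniLM-L6-v2 embedding model, all of which depend on how your distribution is configured.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Assumed call: register a vector DB served by a Chroma-backed VectorIO provider.
# provider_id and the embedding model must match your distribution's configuration.
client.vector_dbs.register(
    vector_db_id="my_documents",
    provider_id="chromadb",
    embedding_model="all-MiniLM-L6-v2",
    embedding_dimension=384,
)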
Distributions
A Llama Stack Distribution (or "distro") is a pre-configured bundle of provider implementations for each API component. Distributions make it easy to get started with a specific deployment scenario - you can begin with a local development setup (e.g., Ollama) and seamlessly transition to production (e.g., Fireworks) without changing your application code. Here are some of the distributions we support:
Distribution | Llama Stack Docker | Start This Distribution |
---|---|---|
Starter Distribution | llamastack/distribution-starter | Guide |
Meta Reference | llamastack/distribution-meta-reference-gpu | Guide |
PostgreSQL | llamastack/distribution-postgres-demo |
Documentation
Please check out our Documentation page for more details.
- CLI references
  - llama (server-side) CLI Reference: Guide for using the llama CLI to work with Llama models (download, study prompts) and to build/start a Llama Stack distribution.
  - llama (client-side) CLI Reference: Guide for using the llama-stack-client CLI, which allows you to query information about the distribution.
- Getting Started
  - Quick guide to start a Llama Stack server.
  - Jupyter notebook to walk through how to use simple text and vision inference llama_stack_client APIs.
  - The complete Llama Stack lesson Colab notebook of the new Llama 3.2 course on Deeplearning.ai.
  - A Zero-to-Hero Guide that guides you through all the key components of Llama Stack, with code samples.
- Contributing
  - Adding a new API Provider: a walkthrough of how to add a new API provider.
Llama Stack Client SDKs
Language | Client SDK | Package |
---|---|---|
Python | llama-stack-client-python | |
Swift | llama-stack-client-swift | |
Typescript | llama-stack-client-typescript | |
Kotlin | llama-stack-client-kotlin |
Check out our client SDKs for connecting to a Llama Stack server in your preferred language: you can choose from Python, TypeScript, Swift, and Kotlin to quickly build your applications.
You can find more example scripts with client SDKs to talk with the Llama Stack server in our llama-stack-apps repo.
🌟 GitHub Star History
✨ Contributors
Thanks to all of our amazing contributors!