Composable building blocks to build Llama Apps https://llama-stack.readthedocs.io
Llama Stack


Quick Start | Documentation | Colab Notebook | Discord

🎉 Llama 4 Support 🎉

We released Version 0.2.0 with support for the Llama 4 herd of models released by Meta.

👋 Here is how to run Llama 4 models on Llama Stack


Note that you need an 8xH100 GPU host to run these models.

pip install -U llama_stack

MODEL="Llama-4-Scout-17B-16E-Instruct"
# get meta url from llama.com
llama model download --source meta --model-id $MODEL --meta-url <META_URL>

# start a llama stack server
INFERENCE_MODEL=meta-llama/$MODEL llama stack build --run --template meta-reference-gpu

# install client to interact with the server
pip install llama-stack-client

CLI

# Run a chat completion
MODEL="Llama-4-Scout-17B-16E-Instruct"

llama-stack-client --endpoint http://localhost:8321 \
inference chat-completion \
--model-id meta-llama/$MODEL \
--message "write a haiku for meta's llama 4 models"

ChatCompletionResponse(
    completion_message=CompletionMessage(content="Whispers in code born\nLlama's gentle, wise heartbeat\nFuture's soft unfold", role='assistant', stop_reason='end_of_turn', tool_calls=[]),
    logprobs=None,
    metrics=[Metric(metric='prompt_tokens', value=21.0, unit=None), Metric(metric='completion_tokens', value=28.0, unit=None), Metric(metric='total_tokens', value=49.0, unit=None)]
)

Python SDK

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"
prompt = "Write a haiku about coding"

print(f"User> {prompt}")
response = client.inference.chat_completion(
    model_id=model_id,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt},
    ],
)
print(f"Assistant> {response.completion_message.content}")

As more providers start supporting Llama 4, you can use them in Llama Stack as well. We are adding to the list. Stay tuned!

🚀 One-Line Installer 🚀

To try Llama Stack locally, run:

curl -LsSf https://github.com/meta-llama/llama-stack/raw/main/scripts/install.sh | bash
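
Once the installer finishes, you can sanity-check that the server is reachable. A small sketch, assuming the stack listens on the default port 8321 and that your client version provides the models list subcommand:

# install the client CLI if you don't have it yet
pip install llama-stack-client

# confirm the server is up by listing the models it serves
llama-stack-client --endpoint http://localhost:8321 models list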

Overview

Llama Stack standardizes the core building blocks that simplify AI application development. It codifies best practices across the Llama ecosystem. More specifically, it provides:

  • Unified API layer for Inference, RAG, Agents, Tools, Safety, Evals, and Telemetry.
  • Plugin architecture to support the rich ecosystem of different API implementations in various environments, including local development, on-premises, cloud, and mobile.
  • Prepackaged verified distributions which offer a one-stop solution for developers to get started quickly and reliably in any environment.
  • Multiple developer interfaces like CLI and SDKs for Python, Typescript, iOS, and Android.
  • Standalone applications as examples for how to build production-grade AI applications with Llama Stack.

Llama Stack Benefits

  • Flexible Options: Developers can choose their preferred infrastructure without changing APIs and enjoy flexible deployment choices (see the sketch below).
  • Consistent Experience: With its unified APIs, Llama Stack makes it easier to build, test, and deploy AI applications with consistent application behavior.
  • Robust Ecosystem: Llama Stack is already integrated with distribution partners (cloud providers, hardware vendors, and AI-focused companies) that offer tailored infrastructure, software, and services for deploying Llama models.

By reducing friction and complexity, Llama Stack empowers developers to focus on what they do best: building transformative generative AI applications.
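
The "same APIs, different infrastructure" point above is easiest to see in code. Below is a minimal sketch; both endpoint URLs are hypothetical placeholders for whatever your deployments actually expose:

from llama_stack_client import LlamaStackClient

# the only thing that changes between environments is the endpoint;
# both URLs below are illustrative placeholders
LOCAL_DEV = "http://localhost:8321"             # e.g. an Ollama-backed stack
PRODUCTION = "https://llama-stack.example.com"  # e.g. a hosted distribution

def ask(base_url: str, model_id: str, prompt: str) -> str:
    # identical application code runs against either deployment
    client = LlamaStackClient(base_url=base_url)
    response = client.inference.chat_completion(
        model_id=model_id,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.completion_message.content

print(ask(LOCAL_DEV, "meta-llama/Llama-4-Scout-17B-16E-Instruct", "Hello!"))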

API Providers

Here is a list of the various API providers and available distributions that can help developers get started easily with Llama Stack. Please check out the documentation for the full list of providers and the APIs each one supports.

API Provider Builder    Environments
Meta Reference          Single Node
SambaNova               Hosted
Cerebras                Hosted
Fireworks               Hosted
AWS Bedrock             Hosted
Together                Hosted
Groq                    Hosted
Ollama                  Single Node
TGI                     Hosted / Single Node
NVIDIA NIM              Hosted / Single Node
ChromaDB                Hosted / Single Node
Milvus                  Hosted / Single Node
Qdrant                  Hosted / Single Node
Weaviate                Hosted / Single Node
SQLite-vec              Single Node
PG Vector               Single Node
PyTorch ExecuTorch      On-device (iOS)
vLLM                    Single Node
OpenAI                  Hosted
Anthropic               Hosted
Gemini                  Hosted
WatsonX                 Hosted
HuggingFace             Single Node
TorchTune               Single Node
NVIDIA NEMO             Hosted
NVIDIA                  Hosted

Note: Additional providers are available through external packages. See the External Providers documentation.
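
Whichever providers back a stack, clients can discover what is available at runtime rather than hard-coding it. A small sketch, assuming a server on the default port (the exact response fields may vary by client version):

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# enumerate the models registered with this stack, whichever
# providers happen to serve them
for model in client.models.list():
    print(model.identifier)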

Distributions

A Llama Stack Distribution (or "distro") is a pre-configured bundle of provider implementations for each API component. Distributions make it easy to get started with a specific deployment scenario - you can begin with a local development setup (e.g., Ollama) and seamlessly transition to production (e.g., Fireworks) without changing your application code. Here are some of the distributions we support:

Distribution Llama Stack Docker Start This Distribution
Starter Distribution llamastack/distribution-starter Guide
Meta Reference llamastack/distribution-meta-reference-gpu Guide
PostgreSQL llamastack/distribution-postgres-demo
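
Switching distributions is a matter of building and running with a different template; the application code stays the same. A sketch, assuming the template names below exist in your llama-stack version (meta-reference-gpu appears earlier in this README; treat the invocation details as illustrative):

# local development: build and run the starter distribution
llama stack build --run --template starter

# later, point the same application at a heavier deployment, e.g. the
# meta-reference-gpu distribution from the Llama 4 example above
INFERENCE_MODEL=meta-llama/Llama-4-Scout-17B-16E-Instruct \
  llama stack build --run --template meta-reference-gpu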

Documentation

Please check out our Documentation page for more details.

Llama Stack Client SDKs

Language     Client SDK
Python       llama-stack-client-python
Swift        llama-stack-client-swift
Typescript   llama-stack-client-typescript
Kotlin       llama-stack-client-kotlin

Check out our client SDKs for connecting to a Llama Stack server in your preferred language: you can choose from Python, Typescript, Swift, and Kotlin to quickly build your applications.

You can find more example scripts with client SDKs to talk with the Llama Stack server in our llama-stack-apps repo.

🌟 GitHub Star History

[Star history chart]

Contributors

Thanks to all of our amazing contributors!