# What does this PR do?
Adds supplementary static content to root API spec pages. This is useful for giving context behind a specific API group, adding information on supported features or work in progress, etc.
This PR introduces supplementary information for Agents (experimental, deprecated) and Responses (stable) APIs.
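As a rough illustration of the idea (the directory layout, file name, and loader below are hypothetical, not the actual implementation), the docs generator could pick up a per-group markdown file and fold it into that group's page:
```python
from pathlib import Path

SUPPLEMENTARY_DIR = Path("docs/supplementary")  # hypothetical location

def load_supplementary_content(api_group: str) -> str:
    """Return extra markdown for an API group, or an empty string if none exists."""
    candidate = SUPPLEMENTARY_DIR / f"{api_group}.md"
    return candidate.read_text() if candidate.exists() else ""

# e.g. prepend the static content to the group's generated description
description = load_supplementary_content("agents") + "\n\nOperations for the Agents API."
```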
## Test Plan
Documentation server renders rich static content for the Agents API group.

# What does this PR do?
First step towards cleaning up the API reference section of the docs.
- Separates API reference into 3 sections: stable (`v1`), experimental (`v1alpha` and `v1beta`), and deprecated (`deprecated=True`)
- Each section is accessible via the dropdown menu and `docs/api-overview`
<img width="1237" height="321" alt="Screenshot 2025-09-30 at 5 47 30 PM" src="https://github.com/user-attachments/assets/fe0e498c-b066-46ed-a48e-4739d3b6724c" />
<img width="860" height="510" alt="Screenshot 2025-09-30 at 5 47 49 PM" src="https://github.com/user-attachments/assets/a92a8d8c-94bf-42d5-9f5b-b47bb2b14f9c" />
- Deprecated APIs: Added styling to the sidebar, and a notice on the endpoint pages
<img width="867" height="428" alt="Screenshot 2025-09-30 at 5 47 43 PM" src="https://github.com/user-attachments/assets/9e6e050d-c782-461b-8084-5ff6496d7bd9" />
Closes #3628
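As a rough sketch of the bucketing described above (function and argument names are illustrative, not the generator's actual code):
```python
def classify_route(path: str, deprecated: bool = False) -> str:
    """Bucket a route into the stable / experimental / deprecated doc sections."""
    if deprecated:
        return "deprecated"
    if path.startswith("/v1alpha/") or path.startswith("/v1beta/"):
        return "experimental"
    return "stable"

assert classify_route("/v1/chat/completions") == "stable"
assert classify_route("/v1alpha/inference/rerank") == "experimental"
assert classify_route("/v1/agents", deprecated=True) == "deprecated"
```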
TODO in follow-up PRs:
- Add the ability to annotate API groups with supplementary content (so we can have longer descriptions of complex APIs like Responses)
- Clean up docstrings to show API endpoints (or short semantic titles) in the sidebar
## Test Plan
- Local testing
- Made sure API conformance test still passes
# What does this PR do?
Given the rapidly changing nature of Llama Stack's APIs and the need for clean, user-friendly API documentation, we want to split the API reference into 3 main buckets: stable, experimental, and deprecated. The most straightforward way to do this is to have several automatically generated doctrees, which introduces some complexity in testing APIs for backwards compatibility.
This PR updates the API conformance test to handle cases where the API schema is split into several files; it does not change the testing criteria.
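A minimal sketch of what handling a split schema can look like, assuming the specs are plain OpenAPI YAML files (the helper and file names below are illustrative, not the actual test code):
```python
from pathlib import Path

import yaml

def collect_routes(spec_paths: list[Path]) -> dict[str, dict]:
    """Merge the `paths` sections of several OpenAPI YAML files into one mapping."""
    merged: dict[str, dict] = {}
    for spec_path in spec_paths:
        spec = yaml.safe_load(spec_path.read_text())
        merged.update(spec.get("paths", {}))
    return merged

# The conformance comparison then runs over the merged routes, e.g.:
# routes = collect_routes([Path("stable.yaml"), Path("experimental.yaml"), Path("deprecated.yaml")])
```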
## Test Plan
No developer-facing changes (all existing tests should pass)
# What does this PR do?
- Categories like `core::server` are not recognized, so their level is not set by `all=debug`
- Removed spammy telemetry debug logging
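A small sketch of the intended category-level resolution (assuming a semicolon-separated `category=level` spec; the parsing details are illustrative):
```python
def resolve_level(category: str, spec: str, default: str = "info") -> str:
    """Apply `all=<level>` to any category, even ones not explicitly listed."""
    levels = dict(item.split("=", 1) for item in spec.split(";") if item)
    return levels.get(category, levels.get("all", default))

assert resolve_level("core::server", "all=debug") == "debug"
```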
## Test Plan
Tested with a server launched with `LLAMA_STACK_LOGGING='all=debug'`.
# What does this PR do?
If the PR title has `!` or the commit footer has `BREAKING CHANGE:`, skip the conformance test. This is documented in the API leveling proposal.
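A minimal sketch of the skip condition, assuming the title and commit body are available as plain strings (this is not the actual CI workflow code):
```python
def should_skip_conformance(pr_title: str, commit_body: str) -> bool:
    """Skip the conformance check for intentionally breaking changes."""
    title_marks_breaking = "!" in pr_title.split(":", 1)[0]  # e.g. "feat!: drop legacy routes"
    footer_marks_breaking = "BREAKING CHANGE:" in commit_body
    return title_marks_breaking or footer_marks_breaking
```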
Signed-off-by: Charlie Doern <cdoern@redhat.com>
# What does this PR do?
Level the following APIs, keeping their old routes around as well until 0.4.0:
1. datasetio to v1beta: used primarily by eval and training. Given that training is v1alpha and eval is v1alpha, datasetio is likely to change in structure as real usages of the API spin up. Register, unregister, and iter dataset are sparsely implemented, meaning the shape of those routes is likely to change.
2. telemetry to v1alpha: telemetry has been going through many changes. For example, query_metrics was not even implemented until recently and had to change its shape to work. Putting this in an alpha state will allow us to fix functionality like OTEL, sqlite, etc. The routes themselves are set, but the structure might change a bit.
Signed-off-by: Charlie Doern <cdoern@redhat.com>
# What does this PR do?
When a model decides to make an MCP tool call that requires no arguments, it sets the `arguments` field to `None`. This causes the user to see a `400 Bad Request` error due to validation errors down the stack, because this field gets removed when being parsed by an OpenAI-compatible inference provider like vLLM.
This PR ensures that, as soon as the tool call args are accumulated while streaming, we check that no tool call function arguments are set to `None`; if they are, we replace them with `"{}"`.
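A simplified sketch of the normalization, using plain dicts rather than the actual streaming chunk types:
```python
def normalize_tool_calls(tool_calls: list[dict]) -> list[dict]:
    """Replace any accumulated tool call arguments that are None with "{}"."""
    for call in tool_calls:
        function = call.get("function") or {}
        if function.get("arguments") is None:
            function["arguments"] = "{}"
        call["function"] = function
    return tool_calls

calls = normalize_tool_calls([{"function": {"name": "namespaces_list", "arguments": None}}])
assert calls[0]["function"]["arguments"] == "{}"
```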
Closes #3456
## Test Plan
Added new unit test to verify that any tool calls with function
arguments set to `None` get handled correctly
---------
Signed-off-by: Jaideep Rao <jrao@redhat.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
# What does this PR do?
Fireworks doesn't allow `response_format` with tool use. The default response format is `text` anyway, so we can safely omit it.
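A simplified sketch of the idea (not the provider adapter's actual code): drop `response_format` whenever tools are present, since the default of `text` is what we want anyway.
```python
def build_request_params(params: dict) -> dict:
    """Drop response_format when tools are used; Fireworks rejects the combination."""
    if params.get("tools"):
        params = {k: v for k, v in params.items() if k != "response_format"}
    return params
```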
## Test Plan
The script below failed without the change and runs successfully after it.
```
#!/usr/bin/env python3
"""
Script to test Responses API with kubernetes-mcp-server.
This script:
1. Connects to the llama stack server
2. Uses the Responses API with MCP tools
3. Asks for the list of Kubernetes namespaces using the kubernetes-mcp-server
"""
import json
from openai import OpenAI
# Connect to the llama stack server
base_url = "http://localhost:8321/v1"
client = OpenAI(base_url=base_url, api_key="fake")
# Define the MCP tool pointing to the kubernetes-mcp-server
# The kubernetes-mcp-server is running on port 3000 with SSE endpoint at /sse
mcp_server_url = "http://localhost:3000/sse"
tools = [
    {
        "type": "mcp",
        "server_label": "k8s",
        "server_url": mcp_server_url,
    }
]
# Create a response request asking for k8s namespaces
print("Sending request to list Kubernetes namespaces...")
print(f"Using MCP server at: {mcp_server_url}")
print("Available tools will be listed automatically by the MCP server.")
print()
response = client.responses.create(
    # model="meta-llama/Llama-3.2-3B-Instruct",  # Using the vllm model
    model="fireworks/accounts/fireworks/models/llama4-scout-instruct-basic",
    # model="openai/gpt-4o",
    input="what are all the Kubernetes namespaces? Use tool call to `namespaces_list`. make sure to adhere to the tool calling format UNDER ALL CIRCUMSTANCES.",
    tools=tools,
    stream=False,
)
print("\n" + "=" * 80)
print("RESPONSE OUTPUT:")
print("=" * 80)
# Print the output
for i, output in enumerate(response.output):
    print(f"\n[Output {i + 1}] Type: {output.type}")
    if output.type == "mcp_list_tools":
        print(f"  Server: {output.server_label}")
        print(f"  Tools available: {[t.name for t in output.tools]}")
    elif output.type == "mcp_call":
        print(f"  Tool called: {output.name}")
        print(f"  Arguments: {output.arguments}")
        print(f"  Result: {output.output}")
        if output.error:
            print(f"  Error: {output.error}")
    elif output.type == "message":
        print(f"  Role: {output.role}")
        print(f"  Content: {output.content}")
print("\n" + "=" * 80)
print("FINAL RESPONSE TEXT:")
print("=" * 80)
print(response.output_text)
```
# What does this PR do?
This PR adds support for `require_approval` on an MCP tool definition passed to create response in the Responses API. This allows the caller to indicate whether they want to approve calls to that server, or let them be called without approval.
Closes #3443
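For illustration, a tool definition with approval required might look like the following; the `"always"`/`"never"` values follow the OpenAI Responses convention, and the exact accepted shapes are defined by the API spec:
```python
tools = [
    {
        "type": "mcp",
        "server_label": "k8s",
        "server_url": "http://localhost:3000/sse",
        "require_approval": "always",  # or "never" to allow calls without approval
    }
]
```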
## Test Plan
Tested both approval and denial.
Added automated integration test for both cases.
---------
Signed-off-by: Gordon Sim <gsim@redhat.com>
Co-authored-by: Matthew Farrellee <matt@cs.wisc.edu>
# What does this PR do?
* Updates the safety guide in Zero to Hero series to use Moderations API
and the latest safety models
* Fixes an image link
Closes #2557
## Test Plan
* Manual testing
# What does this PR do?
* Adds canonical project information and links to client SDK / k8s
operator / app examples repos to the front page
* Fixes some button rendering errors
Closes #3618
## Test Plan
Local rebuild of the documentation server
https://github.com/llamastack/llama-stack/pull/3604 broke multipart form
data field parsing for the Files API since it changed its shape -- so as
to match the API exactly to the OpenAI spec even in the generated client
code.
The underlying reason is that multipart/form-data cannot transport structured nested fields; each field must be str-serialized. The client (specifically the OpenAI client, whose behavior we must match) transports sub-fields as `expires_after[anchor]`, `expires_after[seconds]`, etc. We must be able to handle these fields somehow on the server without compromising the shape of the YAML spec.
This PR "fixes" this by adding a dependency to convert the data. The
main trade-off here is that we must add this `Depends()` annotation on
every provider implementation for Files. This is a headache, but a much
more reasonable one (in my opinion) given the alternatives.
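A hedged sketch of the kind of dependency involved (names and exact parsing are illustrative, not the repo's actual helper):
```python
from fastapi import Request

async def parse_expires_after(request: Request) -> dict | None:
    """Collect bracketed multipart fields like `expires_after[anchor]` into a dict."""
    form = await request.form()
    nested = {
        key[len("expires_after["):-1]: value
        for key, value in form.items()
        if key.startswith("expires_after[") and key.endswith("]")
    }
    return nested or None

# Each Files provider endpoint would then declare something like:
#   expires_after: dict | None = Depends(parse_expires_after)
```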
## Test Plan
Tests as shown in
https://github.com/llamastack/llama-stack/pull/3604#issuecomment-3351090653
pass.
# What does this PR do?
Agents is likely to be deprecated in favor of Responses. Let's level it as alpha to indicate the lack of long-term support.
Keep the v1 route for backwards compat.
Closes #3611
Signed-off-by: Charlie Doern <cdoern@redhat.com>
# What does this PR do?
Migrate the Safety API implementation from `/inference/chat-completion` to `/v1/chat/completions`.
## Test Plan
ci w/ recordings
---------
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
# What does this PR do?
Add llamastack + CrewAI integration example notebook
## Test Plan
Tested in a local Jupyter notebook and it works.
# What does this PR do?
Fixes error:
```
[ERROR] Error executing endpoint route='/v1/openai/v1/responses'
method='post': Error code: 400 - {'error': {'message': "Invalid schema for function 'pods_exec': In context=('properties', 'command'), array
schema missing items.", 'type': 'invalid_request_error', 'param': 'tools[7].function.parameters', 'code': 'invalid_function_parameters'}}
```
From script:
```
#!/usr/bin/env python3
"""
Script to test Responses API with kubernetes-mcp-server.
This script:
1. Connects to the llama stack server
2. Uses the Responses API with MCP tools
3. Asks for the list of Kubernetes namespaces using the kubernetes-mcp-server
"""
import json
from openai import OpenAI
# Connect to the llama stack server
base_url = "http://localhost:8321/v1/openai/v1"
client = OpenAI(base_url=base_url, api_key="fake")
# Define the MCP tool pointing to the kubernetes-mcp-server
# The kubernetes-mcp-server is running on port 3000 with SSE endpoint at /sse
mcp_server_url = "http://localhost:3000/sse"
tools = [
    {
        "type": "mcp",
        "server_label": "k8s",
        "server_url": mcp_server_url,
    }
]
# Create a response request asking for k8s namespaces
print("Sending request to list Kubernetes namespaces...")
print(f"Using MCP server at: {mcp_server_url}")
print("Available tools will be listed automatically by the MCP server.")
print()
response = client.responses.create(
    # model="meta-llama/Llama-3.2-3B-Instruct",  # Using the vllm model
    model="openai/gpt-4o",
    input="what are all the Kubernetes namespaces? Use tool call to `namespaces_list`. make sure to adhere to the tool calling format.",
    tools=tools,
    stream=False,
)
print("\n" + "=" * 80)
print("RESPONSE OUTPUT:")
print("=" * 80)
# Print the output
for i, output in enumerate(response.output):
    print(f"\n[Output {i + 1}] Type: {output.type}")
    if output.type == "mcp_list_tools":
        print(f"  Server: {output.server_label}")
        print(f"  Tools available: {[t.name for t in output.tools]}")
    elif output.type == "mcp_call":
        print(f"  Tool called: {output.name}")
        print(f"  Arguments: {output.arguments}")
        print(f"  Result: {output.output}")
        if output.error:
            print(f"  Error: {output.error}")
    elif output.type == "message":
        print(f"  Role: {output.role}")
        print(f"  Content: {output.content}")
print("\n" + "=" * 80)
print("FINAL RESPONSE TEXT:")
print("=" * 80)
print(response.output_text)
```
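For illustration, the kind of normalization that addresses this error ensures every array property in a tool's JSON schema carries an `items` entry before the tools are handed to the model provider (this sketch is illustrative, not necessarily the exact fix):
```python
def ensure_array_items(schema: dict) -> dict:
    """Give every array property an `items` entry so strict validators accept it."""
    if schema.get("type") == "array" and "items" not in schema:
        schema["items"] = {"type": "string"}  # assumed default item type
    for sub in schema.get("properties", {}).values():
        ensure_array_items(sub)
    return schema

params = ensure_array_items({"type": "object", "properties": {"command": {"type": "array"}}})
assert params["properties"]["command"]["items"] == {"type": "string"}
```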
## Test Plan
New unit tests.
The script now runs successfully.
The `/v1/openai/v1` prefix is annoying and now unnecessary given our
clearer focus on how to think about the API surface.
Let's kill it for the 0.3.0 update.
To make client-side changes feasible, we will do this in two parts. This
part adds a new route (sans `/openai/v1`) so the existing client
continues to work since the server supports both.
The next PR will be client-side (Stainless) changes which I will be
making shortly.
The final PR will remove the `/openai/v1` routes.
Note that all these changes will happen rapidly within this release
cycle. The entire set _will be backwards incompatible_.
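For clients, the transition looks like this (base URLs match the scripts above; both work while the server serves both route sets):
```python
from openai import OpenAI

old_client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="fake")  # legacy prefix
new_client = OpenAI(base_url="http://localhost:8321/v1", api_key="fake")  # new route
```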
# What does this PR do?
Refs: https://github.com/llamastack/llama-stack/issues/3420
When telemetry is enabled, the router unconditionally expects the usage attribute to be available and fails if it is not present.
Usage is not currently being requested by litellm_openai_mixin.py for streaming requests when using the Responses API, which means that providers like vertexai fail if telemetry is enabled and streaming is used.
This is part of the required fix. The other part is in LiteLLM; I plan to submit a PR for that soon.
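The general pattern is to explicitly request usage on streamed requests so the telemetry router has a `usage` attribute to read; in the OpenAI-style API this is done via `stream_options` (sketch below with an illustrative model id; the LiteLLM-side change itself is not shown):
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="fake")
stream = client.chat.completions.create(
    model="gemini-2.0-flash",  # illustrative model id
    messages=[{"role": "user", "content": "hello"}],
    stream=True,
    stream_options={"include_usage": True},  # ask the provider to report usage on the stream
)
```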
## Test Plan
I applied this change along with the change for litellm in a llama stack
deployment and validated that I could make streaming requests through
the responses API to a gemini model and they would succeed instead of
failing due to the missing usage attribute when telemetry is enabled.
Signed-off-by: Michael Dawson <midawson@redhat.com>
# What does this PR do?
Now that `/v1/inference/completion` has been removed, no docs should refer to it.
This cleans up the remaining references.
## Test Plan
ci
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
# What does this PR do?
* Updates image paths for images in docs/resources/ to proper static
image locations
## Test Plan
* `npm run build` builds documentation properly
# What does this PR do?
Move the eval `inline::meta-reference` implementation to use openai_completion/openai_chat_completion.
Note: this breaks backward compatibility if an eval setup used the sampling params' repetition_penalty or strategy.
## Test Plan
ci w/ new recordings
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
# What does this PR do?
We skip embedding tests when the embedding_model_id isn't provided, and the same for completion / chat tests when text_model_id isn't given.
Instead of failing safety tests when a shield_id isn't provided, we'll skip them too.
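A minimal sketch of the skip, mirroring the existing pattern for embedding and text models (the helper name is illustrative):
```python
import pytest

def skip_if_no_shield(shield_id: str | None) -> None:
    """Skip instead of fail when the suite runs without a shield configured."""
    if not shield_id:
        pytest.skip("shield_id not provided; skipping safety tests")
```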
## Test Plan
ci
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
# What does this PR do?
inference/rerank is the one route in the API not intended to be deprecated. Level it as v1alpha.
Additionally, remove `experimental` and opt to instead use `v1alpha`, which itself implies an experimental state based on the original proposal.
Signed-off-by: Charlie Doern <cdoern@redhat.com>
# What does this PR do?
This PR fixes #3300 by adding support for the application/json mime type in
[agent_instance.py](4a59961a6c/llama_stack/providers/inline/agents/meta_reference/agent_instance.py (L923)).
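A simplified sketch of the mime-type check in the spirit of `get_raw_document_text` (illustrative only, not the actual implementation):
```python
def document_to_text(content: str, mime_type: str) -> str:
    """Accept application/json alongside the text-like mime types."""
    supported = {"application/json", "application/yaml"}
    if mime_type in supported or mime_type.startswith("text/"):
        return content
    raise ValueError(f"Unsupported mime type: {mime_type}")
```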
## Test Plan
All related pytest tests passed; see log:
```
./scripts/unit-tests.sh tests/unit/providers/agent/test_get_raw_document_text.py -vvv
/Users/kaiwu/work/kaiwu/llama-stack/.venv/bin/python3
Uninstalled 22 packages in 5.65s
Installed 47 packages in 1.24s
================= test session starts =================
platform darwin -- Python 3.12.9, pytest-8.4.2, pluggy-1.6.0 -- /Users/kaiwu/work/kaiwu/llama-stack/.venv/bin/python
cachedir: .pytest_cache
metadata: {'Python': '3.12.9', 'Platform': 'macOS-15.6.1-arm64-arm-64bit', 'Packages': {'pytest': '8.4.2', 'pluggy': '1.6.0'}, 'Plugins': {'anyio': '4.9.0', 'html': '4.1.1', 'socket': '0.7.0', 'asyncio': '1.1.0', 'json-report': '1.5.0', 'timeout': '2.4.0', 'metadata': '3.1.1', 'cov': '6.2.1', 'nbval': '0.11.0'}}
rootdir: /Users/kaiwu/work/kaiwu/llama-stack
configfile: pyproject.toml
plugins: anyio-4.9.0, html-4.1.1, socket-0.7.0, asyncio-1.1.0, json-report-1.5.0, timeout-2.4.0, metadata-3.1.1, cov-6.2.1, nbval-0.11.0
asyncio: mode=Mode.AUTO, asyncio_default_fixture_loop_scope=None, asyncio_default_test_loop_scope=function
collected 14 items
tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_supports_text_mime_types PASSED
tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_supports_yaml_mime_type PASSED
tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_supports_deprecated_text_yaml_with_warning PASSED
tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_deprecated_text_yaml_with_url PASSED
tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_deprecated_text_yaml_with_text_content_item PASSED
tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_supports_json_mime_type PASSED
tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_with_json_url PASSED
tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_with_json_text_content_item PASSED
tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_rejects_unsupported_mime_types PASSED
tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_with_url_content PASSED
tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_with_yaml_url PASSED
tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_with_text_content_item PASSED
tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_with_yaml_text_content_item PASSED
tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_rejects_unexpected_content_type PASSED
================ slowest 10 durations =================
0.00s call tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_deprecated_text_yaml_with_url
0.00s call tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_rejects_unsupported_mime_types
0.00s call tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_rejects_unexpected_content_type
0.00s setup tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_supports_text_mime_types
0.00s teardown tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_supports_text_mime_types
0.00s call tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_with_yaml_url
0.00s call tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_with_url_content
0.00s teardown tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_rejects_unsupported_mime_types
0.00s call tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_with_json_url
0.00s call tests/unit/providers/agent/test_get_raw_document_text.py::test_get_raw_document_text_supports_text_mime_types
================= 14 passed in 0.14s ==================
Generating coverage report...
Wrote HTML report to htmlcov-3.12/index.html
```
Reverts llamastack/llama-stack#3576
When I edit Stainless and codegen succeeds, the `next` branch is updated directly. This gives us no chance to see if something unideal is going on. If something is wrong, all CI will start breaking immediately. This is not ideal. I will likely create another staging branch `next-release` or something to accommodate the special workflow that Stainless requires.
# What does this PR do?
Mirroring the same changes that were used for inference_store:
https://github.com/llamastack/llama-stack/pull/3383
Will follow up with a shared internal API for managing these write
queues.
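As a rough, generic illustration of the write-queue pattern being mirrored (not the store's actual implementation):
```python
import asyncio
from typing import Any, Awaitable, Callable

class WriteQueue:
    """Buffer writes in memory and flush them from a background worker."""

    def __init__(self, flush_fn: Callable[[Any], Awaitable[None]], maxsize: int = 1000):
        self._queue: asyncio.Queue = asyncio.Queue(maxsize=maxsize)
        self._flush_fn = flush_fn

    async def enqueue(self, item: Any) -> None:
        await self._queue.put(item)  # applies back-pressure when the queue is full

    async def worker(self) -> None:
        while True:
            item = await self._queue.get()
            try:
                await self._flush_fn(item)
            finally:
                self._queue.task_done()
```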
## Test Plan
existing tests
# What does this PR do?
The nvidia datastore tests were running when the datastore was not configured; they would always fail.
This introduces a skip when the nvidia datastore is not configured.
## Test Plan
ci
Bumps [actions/cache](https://github.com/actions/cache) from 4.2.4 to
4.3.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/releases">actions/cache's
releases</a>.</em></p>
<blockquote>
<h2>v4.3.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Add note on runner versions by <a
href="https://github.com/GhadimiR"><code>@GhadimiR</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1642">actions/cache#1642</a></li>
<li>Prepare <code>v4.3.0</code> release by <a
href="https://github.com/Link"><code>@Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1655">actions/cache#1655</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/GhadimiR"><code>@GhadimiR</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1642">actions/cache#1642</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v4...v4.3.0">https://github.com/actions/cache/compare/v4...v4.3.0</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/blob/main/RELEASES.md">actions/cache's
changelog</a>.</em></p>
<blockquote>
<h1>Releases</h1>
<h3>4.3.0</h3>
<ul>
<li>Bump <code>@actions/cache</code> to <a
href="https://redirect.github.com/actions/toolkit/pull/2132">v4.1.0</a></li>
</ul>
<h3>4.2.4</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.5</li>
</ul>
<h3>4.2.3</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.3 (obfuscates SAS token in
debug logs for cache entries)</li>
</ul>
<h3>4.2.2</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.2</li>
</ul>
<h3>4.2.1</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.1</li>
</ul>
<h3>4.2.0</h3>
<p>TLDR; The cache backend service has been rewritten from the ground up
for improved performance and reliability. <a
href="https://github.com/actions/cache">actions/cache</a> now integrates
with the new cache service (v2) APIs.</p>
<p>The new service will gradually roll out as of <strong>February 1st,
2025</strong>. The legacy service will also be sunset on the same date.
Changes in these release are <strong>fully backward
compatible</strong>.</p>
<p><strong>We are deprecating some versions of this action</strong>. We
recommend upgrading to version <code>v4</code> or <code>v3</code> as
soon as possible before <strong>February 1st, 2025.</strong> (Upgrade
instructions below).</p>
<p>If you are using pinned SHAs, please use the SHAs of versions
<code>v4.2.0</code> or <code>v3.4.0</code></p>
<p>If you do not upgrade, all workflow runs using any of the deprecated
<a href="https://github.com/actions/cache">actions/cache</a> will
fail.</p>
<p>Upgrading to the recommended versions will not break your
workflows.</p>
<h3>4.1.2</h3>
<ul>
<li>Add GitHub Enterprise Cloud instances hostname filters to inform API
endpoint choices - <a
href="https://redirect.github.com/actions/cache/pull/1474">#1474</a></li>
<li>Security fix: Bump braces from 3.0.2 to 3.0.3 - <a
href="https://redirect.github.com/actions/cache/pull/1475">#1475</a></li>
</ul>
<h3>4.1.1</h3>
<ul>
<li>Restore original behavior of <code>cache-hit</code> output - <a
href="https://redirect.github.com/actions/cache/pull/1467">#1467</a></li>
</ul>
<h3>4.1.0</h3>
<ul>
<li>Ensure <code>cache-hit</code> output is set when a cache is missed -
<a
href="https://redirect.github.com/actions/cache/pull/1404">#1404</a></li>
<li>Deprecate <code>save-always</code> input - <a
href="https://redirect.github.com/actions/cache/pull/1452">#1452</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="0057852bfa"><code>0057852</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1655">#1655</a>
from actions/Link-/prepare-4.3.0</li>
<li><a
href="4f5ea67f1c"><code>4f5ea67</code></a>
Update licensed cache</li>
<li><a
href="9fcad95d03"><code>9fcad95</code></a>
Upgrade actions/cache to 4.1.0 and prepare 4.3.0 release</li>
<li><a
href="638ed79f9d"><code>638ed79</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1642">#1642</a>
from actions/GhadimiR-patch-1</li>
<li><a
href="3862dccb17"><code>3862dcc</code></a>
Add note on runner versions</li>
<li>See full diff in <a
href="0400d5f644...0057852bfa">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [tw-animate-css](https://github.com/Wombosvideo/tw-animate-css)
from 1.2.9 to 1.4.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/Wombosvideo/tw-animate-css/releases">tw-animate-css's
releases</a>.</em></p>
<blockquote>
<h2>v1.4.0</h2>
<h2>Changelog</h2>
<p>902e37a019ffd165ba078e0b3c02634526c54bf0: fix: remove support for
prefix, add new export for prefixed version. Closes <a
href="https://redirect.github.com/Wombosvideo/tw-animate-css/issues/58">#58</a>.
fab2a5bf817605be1976e159976718a83489fc1c: chore: bump version to 1.4.0
and update dependencies
c20dc32e2b532a8e74546879b4ce7d9ce89ba710: fix(build): make transform.ts
accept two arguments</p>
<h2>⚠️ BREAKING CHANGE ⚠️</h2>
<p>Support for Tailwind CSS's prefix option was moved to
<code>tw-animate-css/prefix</code> because it was breaking the
<code>--spacing</code> function. Users requiring prefixes should replace
their import:</p>
<pre lang="diff"><code>- import "tw-animate-css";
+ import "tw-animate-css/prefix";
</code></pre>
<p><em>I do not plan to introduce breaking changes like this to
non-major releases in the future. But because more people use spacing
rather than prefixes, reverting the previous version's (obviously
breaking) change seems reasonable.</em></p>
<h2>v1.3.8</h2>
<h2>Changelog</h2>
<ul>
<li>b5ff23a: fix: add support for global CSS variable prefix. Closes <a
href="https://redirect.github.com/Wombosvideo/tw-animate-css/issues/48">#48</a></li>
<li>03e5f12: feat: add support for ng-primitives height variables <a
href="https://redirect.github.com/Wombosvideo/tw-animate-css/issues/56">#56</a>
(thanks <a
href="https://github.com/immohammadjaved"><code>@immohammadjaved</code></a>)</li>
<li>b076cfb: docs: fix various issues in accordion and collapsible
docs</li>
<li>9485e33: chore: bump version to 1.3.8 and update dependencies</li>
</ul>
<h2>⚠️ BREAKING CHANGE ⚠️</h2>
<p>Adding support for prefixes broke custom spacing. It is recommended
that you skip this version if you do not use Tailwind CSS's prefix
option, and use v1.4.0 instead. If you are actually using prefixes, you
can use a special version supporting prefixes:</p>
<pre lang="diff"><code>- import "tw-animate-css"; /* Version
with spacing support */
+ import "tw-animate-css/prefix"; /* Version with prefix
support */
</code></pre>
<p><em>I do not plan to fix the incompatibility between the spacing and
prefix versions due to time constraints. Feel free to investigate and
open a pull request if you manage to fix it.</em></p>
<h2>v1.3.7</h2>
<h2>Changelog</h2>
<ul>
<li>80dbfcc: feat: add utilities for blur transitions <a
href="https://redirect.github.com/Wombosvideo/tw-animate-css/issues/54">#54</a>
(thanks <a
href="https://github.com/coffeeispower"><code>@coffeeispower</code></a>)</li>
<li>dc294f9: docs: add upcoming changes warning</li>
<li>c640bb8: chore: update dependencies and package manager version</li>
<li>9e63e34: chore: bump version to 1.3.7</li>
</ul>
<h2>v1.3.6</h2>
<h2>Changelog</h2>
<ul>
<li>58f3396: fix: allow changing animation parameters for ready-to-use
animations</li>
<li>8313476: chore: update dependencies nd package manager version</li>
<li>f81346c: chore: bump version to 1.3.6</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c20dc32e2b"><code>c20dc32</code></a>
fix(build): make transform.ts accept two arguments</li>
<li><a
href="fab2a5bf81"><code>fab2a5b</code></a>
chore: bump version to 1.4.0 and update dependencies</li>
<li><a
href="902e37a019"><code>902e37a</code></a>
fix: remove support for prefix, add new export for prefixed version</li>
<li><a
href="9485e33d99"><code>9485e33</code></a>
chore: bump version to 1.3.8 and update dependencies</li>
<li><a
href="b076cfb04a"><code>b076cfb</code></a>
docs: fix various issues in accordion and collapsible docs</li>
<li><a
href="03e5f12418"><code>03e5f12</code></a>
feat: add support for ng-primitives height variables (<a
href="https://redirect.github.com/Wombosvideo/tw-animate-css/issues/56">#56</a>)</li>
<li><a
href="b5ff23a0d5"><code>b5ff23a</code></a>
fix: add support for global CSS variable prefix. Closes <a
href="https://redirect.github.com/Wombosvideo/tw-animate-css/issues/48">#48</a></li>
<li><a
href="9e63e34286"><code>9e63e34</code></a>
chore: bump version to 1.3.7</li>
<li><a
href="c640bb8933"><code>c640bb8</code></a>
chore: update dependencies and package manager version</li>
<li><a
href="dc294f990a"><code>dc294f9</code></a>
docs: add upcoming changes warning</li>
<li>Additional commits viewable in <a
href="https://github.com/Wombosvideo/tw-animate-css/compare/v1.2.9...v1.4.0">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>