llama-stack-mirror/docs/static
Luis Tomas Bolivar 63422e5b36
fix!: Enhance response API support to not fail with tool calling (#3385)
# What does this PR do?
Introduces two fixes to improve the stability of the Responses API when
handling tool-calling responses and structured outputs.

### Changes Made

1. Adds OpenAIResponseOutputMessageMCPCall and ListTools to
OpenAIResponseInput.
https://github.com/llamastack/llama-stack/pull/3810, which was merged in
the meantime, did the same in a different way; this PR, however, does it
in a way that keeps OpenAIResponseOutput and the allowed objects in
OpenAIResponseInput in sync.

2. Adds a guard for the case where self.ctx.response_format has no
`type` attribute.

BREAKING CHANGE: OpenAIResponseInput now uses the OpenAIResponseOutput
union type. This is semantically equivalent: all previously accepted
types are still supported via the OpenAIResponseOutput union. The change
improves type consistency and maintainability.
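A minimal sketch of the sync idea behind this breaking change (the class names below are illustrative stand-ins, not the real model definitions): defining the input union in terms of the output union means any newly added output type is automatically accepted as input.

```python
from typing import Union, get_args

# Illustrative stand-ins for the response models; the real definitions
# live in the Llama Stack Responses API schema.
class OpenAIResponseMessage: ...
class OpenAIResponseOutputMessageMCPCall: ...
class SomeInputOnlyType: ...

# The output union is defined once...
OpenAIResponseOutput = Union[
    OpenAIResponseMessage,
    OpenAIResponseOutputMessageMCPCall,
]

# ...and the input union reuses it, so adding a new output member can
# never leave the input side out of sync. Python flattens nested unions:
OpenAIResponseInput = Union[
    OpenAIResponseOutput,
    SomeInputOnlyType,
]

print(len(get_args(OpenAIResponseInput)))  # -> 3
```

Because `Union[Union[A, B], C]` flattens to `Union[A, B, C]`, the input union always contains every output type plus any input-only types.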
2025-10-27 09:33:02 -07:00
| Name | Last commit | Last updated |
| --- | --- | --- |
| img | docs: update OG image (#3669) | 2025-10-03 10:22:54 -07:00 |
| providers/vector_io | docs: static content migration (#3535) | 2025-09-24 14:08:50 -07:00 |
| deprecated-llama-stack-spec.html | fix!: Enhance response API support to not fail with tool calling (#3385) | 2025-10-27 09:33:02 -07:00 |
| deprecated-llama-stack-spec.yaml | fix!: Enhance response API support to not fail with tool calling (#3385) | 2025-10-27 09:33:02 -07:00 |
| experimental-llama-stack-spec.html | fix: Avoid BadRequestError due to invalid max_tokens (#3667) | 2025-10-27 09:27:21 -07:00 |
| experimental-llama-stack-spec.yaml | fix: Avoid BadRequestError due to invalid max_tokens (#3667) | 2025-10-27 09:27:21 -07:00 |
| llama-stack-spec.html | fix!: Enhance response API support to not fail with tool calling (#3385) | 2025-10-27 09:33:02 -07:00 |
| llama-stack-spec.yaml | fix!: Enhance response API support to not fail with tool calling (#3385) | 2025-10-27 09:33:02 -07:00 |
| remote_or_local.gif | docs: static content migration (#3535) | 2025-09-24 14:08:50 -07:00 |
| safety_system.webp | docs: static content migration (#3535) | 2025-09-24 14:08:50 -07:00 |
| site.webmanifest | docs: add favicon and mobile styling (#3650) | 2025-10-02 10:42:54 +02:00 |
| stainless-llama-stack-spec.html | fix!: Enhance response API support to not fail with tool calling (#3385) | 2025-10-27 09:33:02 -07:00 |
| stainless-llama-stack-spec.yaml | fix!: Enhance response API support to not fail with tool calling (#3385) | 2025-10-27 09:33:02 -07:00 |