llama-stack-mirror/llama_stack/providers/inline
Ben Browning 4df8caab41 Fixes for multi-turn tool calls in Responses API
While testing with Codex locally, I found another issue in how we
plumb tool calls through multi-turn scenarios, and in how tool call
inputs and outputs from previous turns are passed back into
subsequent turns.
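
For context, here is a minimal sketch of a second-turn request in
which the tool call and its output from turn one are passed back as
input items. The item shapes follow the OpenAI Responses API wire
format that Llama Stack mirrors; the values are illustrative
assumptions, not output from this repo:

    # Hypothetical second-turn Responses API input, as plain dicts.
    second_turn_input = [
        {"type": "message", "role": "user",
         "content": "What's the weather in Boston?"},
        # The function tool call the model emitted on the previous turn...
        {
            "type": "function_call",
            "call_id": "call_123",
            "name": "get_weather",
            "arguments": '{"city": "Boston"}',
        },
        # ...and its output -- the input item type this change adds.
        {
            "type": "function_call_output",
            "call_id": "call_123",
            "output": '{"temperature_f": 57}',
        },
        {"type": "message", "role": "user",
         "content": "Is that warmer than yesterday?"},
    ]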

This led me to realize the Responses API was missing the function
tool call output input item type, so this change adds it and plumbs
its handling through the Responses-API-to-chat-completion conversion
code.
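
To make that conversion concrete, here is a rough sketch; the
function name is hypothetical, not an identifier from this patch,
and the shipped code works with typed models rather than plain
dicts. Each function_call item is replayed as an assistant message
carrying tool_calls, and each function_call_output item becomes a
tool-role message tied back to its call via the same call_id:

    def response_input_to_chat_messages(items: list[dict]) -> list[dict]:
        """Hypothetical sketch of Responses-input -> chat-message
        conversion; the real code uses typed models, not dicts."""
        messages: list[dict] = []
        for item in items:
            if item["type"] == "message":
                messages.append(
                    {"role": item["role"], "content": item["content"]}
                )
            elif item["type"] == "function_call":
                # Prior-turn tool call, replayed as an assistant message.
                messages.append({
                    "role": "assistant",
                    "tool_calls": [{
                        "id": item["call_id"],
                        "type": "function",
                        "function": {
                            "name": item["name"],
                            "arguments": item["arguments"],
                        },
                    }],
                })
            elif item["type"] == "function_call_output":
                # The newly added input item type: a tool-role message
                # linked to the originating call via tool_call_id.
                messages.append({
                    "role": "tool",
                    "tool_call_id": item["call_id"],
                    "content": item["output"],
                })
        return messages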

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-05-08 16:21:15 -04:00
agents           Fixes for multi-turn tool calls in Responses API                   2025-05-08 16:21:15 -04:00
datasetio        feat: implementation for agent/session list and describe (#1606)   2025-05-07 14:49:23 +02:00
eval             feat: implementation for agent/session list and describe (#1606)   2025-05-07 14:49:23 +02:00
inference        chore: enable pyupgrade fixes (#1806)                              2025-05-01 14:23:50 -07:00
ios/inference    chore: removed executorch submodule (#1265)                        2025-02-25 21:57:21 -08:00
post_training    fix: Don't require efficiency_config for torchtune (#2104)        2025-05-06 09:50:44 -07:00
safety           chore: enable pyupgrade fixes (#1806)                              2025-05-01 14:23:50 -07:00
scoring          chore: enable pyupgrade fixes (#1806)                              2025-05-01 14:23:50 -07:00
telemetry        feat: add metrics query API (#1394)                                2025-05-07 10:11:26 -07:00
tool_runtime     fix: remove code interpeter implementation (#2087)                2025-05-01 14:35:08 -07:00
vector_io        feat: implementation for agent/session list and describe (#1606)   2025-05-07 14:49:23 +02:00
__init__.py      impls -> inline, adapters -> remote (#381)                         2024-11-06 14:54:05 -08:00