llama-stack-mirror/llama_stack
commit 4df8caab41 — Ben Browning
Fixes for multi-turn tool calls in Responses API
Testing with Codex locally, I found another issue in how we plumb
tool calls through multi-turn scenarios: tool call inputs and outputs
from previous turns were not being passed back correctly into
subsequent turns.

This led me to realize we were missing the function tool call output
type in the Responses API, so this adds that type and plumbs handling
of it through the Responses-API-to-chat-completion conversion code.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-05-08 16:21:15 -04:00
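As a rough sketch of the multi-turn flow the commit describes (not the actual llama-stack code — the helper below is hypothetical, and the item shapes follow the OpenAI-style Responses API format that llama-stack mirrors): the function call emitted in one turn and its output must both be carried forward as input items for the next turn.

```python
# Illustrative only: `build_next_turn_input` is a hypothetical helper, and the
# dict shapes assume the OpenAI-style Responses API item format.

def build_next_turn_input(previous_output_items, tool_results):
    """Carry forward the prior turn's items and append each tool result
    as a function tool call output item (the type this commit adds)."""
    items = list(previous_output_items)
    for call_id, result in tool_results.items():
        items.append({
            "type": "function_call_output",
            "call_id": call_id,
            "output": result,
        })
    return items

# Turn 1: the model emitted a function tool call.
turn1_output = [{
    "type": "function_call",
    "call_id": "call_123",
    "name": "get_weather",
    "arguments": '{"city": "Boston"}',
}]

# Turn 2: feed both the call and its result back in as input items.
turn2_input = build_next_turn_input(
    turn1_output, {"call_123": '{"temp_f": 57}'}
)
```

Before the fix, the conversion to chat completion messages had no way to represent the `function_call_output` item, so prior turns' tool results could not round-trip into future turns.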
apis/             Fixes for multi-turn tool calls in Responses API                                        2025-05-08 16:21:15 -04:00
cli/              chore(refact)!: simplify config management (#1105)                                      2025-05-07 09:18:12 -07:00
distribution/     feat: add function tools to openai responses                                            2025-05-08 07:03:47 -04:00
models/           fix: llama4 tool use prompt fix (#2103)                                                 2025-05-06 22:18:31 -07:00
providers/        Fixes for multi-turn tool calls in Responses API                                        2025-05-08 16:21:15 -04:00
strong_typing/    chore: enable pyupgrade fixes (#1806)                                                   2025-05-01 14:23:50 -07:00
templates/        chore(refact)!: simplify config management (#1105)                                      2025-05-07 09:18:12 -07:00
__init__.py       export LibraryClient                                                                    2024-12-13 12:08:00 -08:00
env.py            refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401)  2025-03-04 14:53:47 -08:00
log.py            chore: enable pyupgrade fixes (#1806)                                                   2025-05-01 14:23:50 -07:00
schema_utils.py   chore: enable pyupgrade fixes (#1806)                                                   2025-05-01 14:23:50 -07:00