llama-stack-mirror/llama_stack/providers/inline
Ben Browning 263eb6fd37 fix: Restore previous responses to input list, not messages
This adjusts the restoration of previous responses so they are prepended to
the list of Responses API inputs rather than to our converted list of Chat
Completion messages. This matches the expected behavior of the Responses API;
I had misinterpreted that nuance in the initial implementation.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-05-02 11:56:39 -04:00
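As a rough illustration of the change described in the commit message (not the actual llama_stack implementation; `StoredResponse`, `_RESPONSES`, and `_convert_input_to_chat_messages` below are stand-ins), restoring a previous response means prepending its items to the Responses API input list before that list is converted to Chat Completion messages:

```python
# Hypothetical sketch of the behavior described above; the types, storage
# layer, and conversion helper are stand-ins, not real llama_stack APIs.
from dataclasses import dataclass, field


@dataclass
class StoredResponse:
    # A prior Responses API turn: the input items it was called with
    # plus the output items it produced.
    input: list[dict] = field(default_factory=list)
    output: list[dict] = field(default_factory=list)


_RESPONSES: dict[str, StoredResponse] = {}


def _convert_input_to_chat_messages(items: list[dict]) -> list[dict]:
    # Stand-in for the Responses-input -> Chat Completion conversion step.
    return [
        {"role": item.get("role", "user"), "content": item.get("content", "")}
        for item in items
    ]


def build_chat_messages(
    new_input: list[dict], previous_response_id: str | None = None
) -> list[dict]:
    # The fix: prepend the previous response's items to the *input* list,
    # rather than to the already-converted Chat Completion messages.
    if previous_response_id is not None:
        prev = _RESPONSES[previous_response_id]
        new_input = [*prev.input, *prev.output, *new_input]
    # Conversion happens once, over the merged input list.
    return _convert_input_to_chat_messages(new_input)


# Example: a follow-up call that references a stored previous response.
_RESPONSES["resp_1"] = StoredResponse(
    input=[{"role": "user", "content": "Hi"}],
    output=[{"role": "assistant", "content": "Hello!"}],
)
print(build_chat_messages([{"role": "user", "content": "And again?"}], "resp_1"))
```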
| Directory | Last commit | Date |
| --- | --- | --- |
| agents | fix: Restore previous responses to input list, not messages | 2025-05-02 11:56:39 -04:00 |
| datasetio | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| eval | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| inference | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| ios/inference | chore: removed executorch submodule (#1265) | 2025-02-25 21:57:21 -08:00 |
| post_training | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| safety | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| scoring | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| telemetry | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| tool_runtime | fix: remove code interpeter implementation (#2087) | 2025-05-01 14:35:08 -07:00 |
| vector_io | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |