llama-stack-mirror/llama_stack/providers/inline
Ben Browning b90bb66f28 fix: Restore previous responses to input list, not messages
This adjusts the restoration of previous responses to prepend them to
the list of Responses API inputs instead of to our converted list of
Chat Completion messages. This matches the expected behavior of the
Responses API; I had misinterpreted that nuance in the initial
implementation.
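A minimal sketch of the distinction (not the actual llama-stack code; all names here are hypothetical): the previous response's items are prepended to the Responses API input list, and the combined list is converted to Chat Completion messages in a single pass, rather than splicing the old turns into an already-converted message list.

```python
# Hedged illustration of "restore to input list, not messages".
# All class and function names below are hypothetical.
from dataclasses import dataclass, field


@dataclass
class StoredResponse:
    # A previously stored response: the input items it received
    # and the output items it produced.
    input_items: list[dict] = field(default_factory=list)
    output_items: list[dict] = field(default_factory=list)


def restore_previous_response(new_input: list[dict],
                              previous: StoredResponse) -> list[dict]:
    """Prepend the previous response's input and output items to the
    new Responses API input list, before any message conversion."""
    return previous.input_items + previous.output_items + new_input


def convert_input_to_chat_messages(input_items: list[dict]) -> list[dict]:
    # Single conversion step: every Responses input item becomes
    # one Chat Completion message.
    return [{"role": i["role"], "content": i["content"]} for i in input_items]


previous = StoredResponse(
    input_items=[{"role": "user", "content": "What is 2+2?"}],
    output_items=[{"role": "assistant", "content": "4"}],
)
new_input = [{"role": "user", "content": "And times 3?"}]

# Restoration happens on the input list; conversion runs once, afterwards.
combined = restore_previous_response(new_input, previous)
messages = convert_input_to_chat_messages(combined)
```

Because the restored items live in the input list, anything derived from that list (the converted messages, the stored input history of the new response) sees the full conversation without a second code path.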

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-05-08 07:03:47 -04:00
agents          fix: Restore previous responses to input list, not messages        2025-05-08 07:03:47 -04:00
datasetio       feat: implementation for agent/session list and describe (#1606)   2025-05-07 14:49:23 +02:00
eval            feat: implementation for agent/session list and describe (#1606)   2025-05-07 14:49:23 +02:00
inference       chore: enable pyupgrade fixes (#1806)                              2025-05-01 14:23:50 -07:00
ios/inference   chore: removed executorch submodule (#1265)                        2025-02-25 21:57:21 -08:00
post_training   fix: Don't require efficiency_config for torchtune (#2104)         2025-05-06 09:50:44 -07:00
safety          chore: enable pyupgrade fixes (#1806)                              2025-05-01 14:23:50 -07:00
scoring         chore: enable pyupgrade fixes (#1806)                              2025-05-01 14:23:50 -07:00
telemetry       feat: add metrics query API (#1394)                                2025-05-07 10:11:26 -07:00
tool_runtime    fix: remove code interpeter implementation (#2087)                 2025-05-01 14:35:08 -07:00
vector_io       feat: implementation for agent/session list and describe (#1606)   2025-05-07 14:49:23 +02:00
__init__.py     impls -> inline, adapters -> remote (#381)                         2024-11-06 14:54:05 -08:00