llama-stack-mirror/llama_stack/providers/inline/agents/meta_reference
Ben Browning 5b2e850754 fix: Responses API previous_response input items
This stores input items alongside previous responses and then restores
those input items, prepending them to the user's messages list when
conversation state is used.

I missed this in the initial implementation, but it makes sense that we
have to store the input items from previous responses so that we can
reconstruct the proper message stack for multi-turn conversations; the
output from previous responses alone isn't enough context for the
models to follow the turns and the original instructions.
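
For illustration, a minimal sketch of the store-and-restore flow, assuming
a simple async key-value store. The key prefix, record layout, and helper
names below are hypothetical and not the actual code in openai_responses.py:

```python
import json

# Hypothetical key prefix for persisted responses (illustrative only).
OPENAI_RESPONSES_PREFIX = "openai_responses:"


async def store_response_with_input(kvstore, response_id, response, input_items):
    # Persist both the response and the input items that produced it, so a
    # later request referencing this response can rebuild the full history.
    record = {"response": response, "input_items": input_items}
    await kvstore.set(f"{OPENAI_RESPONSES_PREFIX}{response_id}", json.dumps(record))


async def build_messages(kvstore, previous_response_id, new_input_items):
    # Reconstruct the message stack for a multi-turn conversation:
    # previous input items first, then the previous output, then new input.
    messages = []
    if previous_response_id:
        raw = await kvstore.get(f"{OPENAI_RESPONSES_PREFIX}{previous_response_id}")
        if raw:
            record = json.loads(raw)
            messages.extend(record["input_items"])
            for output in record["response"].get("output", []):
                if output.get("type") == "message":
                    messages.append({"role": "assistant", "content": output["content"]})
    messages.extend(new_input_items)
    return messages
```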

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-05-08 06:58:43 -04:00
__init__.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
agent_instance.py feat: implementation for agent/session list and describe (#1606) 2025-05-07 14:49:23 +02:00
agents.py feat: implementation for agent/session list and describe (#1606) 2025-05-07 14:49:23 +02:00
config.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
openai_responses.py fix: Responses API previous_response input items 2025-05-08 06:58:43 -04:00
persistence.py feat: implementation for agent/session list and describe (#1606) 2025-05-07 14:49:23 +02:00
safety.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00