llama-stack-mirror/llama_stack/providers
Ben Browning 5b2e850754 fix: Responses API previous_response input items
This stores the input items alongside previous responses and then
restores those input items, prepending them to the user's messages list
when conversation state is used.

I missed this in the initial implementation, but it makes sense that we
have to store the input items from previous responses so that we can
reconstruct the proper message stack for multi-turn conversations; the
output from previous responses alone isn't enough context for the
models to follow the turns and the original instructions.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-05-08 06:58:43 -04:00
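
As a rough illustration of the flow this commit describes: when a request
references a previous response, the stored input items are replayed ahead of
that response's output, and the new input is appended last. This is only a
sketch; the names used here (StoredResponse, build_messages) are hypothetical
and not the provider's actual code.

```python
# Illustrative sketch only; StoredResponse and build_messages are hypothetical
# names, not the actual llama-stack provider implementation.
from dataclasses import dataclass


@dataclass
class StoredResponse:
    """A previous response persisted with the input items that produced it."""

    response_id: str
    input_items: list[dict]   # messages sent with the earlier request
    output_items: list[dict]  # output the model returned for that request


def build_messages(new_input: list[dict], previous: StoredResponse | None) -> list[dict]:
    """Reconstruct the message stack for a multi-turn request.

    Replaying only the previous output would drop the earlier user turns and
    instructions, so the stored input items come first, then the previous
    output, then the new input.
    """
    messages: list[dict] = []
    if previous is not None:
        messages.extend(previous.input_items)
        messages.extend(previous.output_items)
    messages.extend(new_input)
    return messages


# Example: a follow-up question that relies on conversation state.
prev = StoredResponse(
    response_id="resp_123",
    input_items=[{"role": "user", "content": "What is the capital of France?"}],
    output_items=[{"role": "assistant", "content": "Paris."}],
)
print(build_messages([{"role": "user", "content": "And its population?"}], previous=prev))
```

The resulting list contains the earlier question, the model's answer, and the
follow-up, which is what the model needs to resolve references like "its" in
the new turn.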
inline        fix: Responses API previous_response input items 2025-05-08 06:58:43 -04:00
registry      feat(providers): sambanova updated to use LiteLLM openai-compat (#1596) 2025-05-06 16:50:22 -07:00
remote        feat: implementation for agent/session list and describe (#1606) 2025-05-07 14:49:23 +02:00
tests         chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
utils         feat: implementation for agent/session list and describe (#1606) 2025-05-07 14:49:23 +02:00
__init__.py   API Updates (#73) 2024-09-17 19:51:35 -07:00
datatypes.py  chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00