Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-27 21:31:59 +00:00
This adds storing of input items alongside previous responses, and then restores those input items to prepend to the user's messages list when using conversation state. I missed this in the initial implementation, but it makes sense: we have to store the input items from previous responses so that we can reconstruct the proper message stack for multi-turn conversations. The output from previous responses alone isn't enough context for the model to follow the turns and the original instructions.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
Files:

- `__init__.py`
- `agents.py`
- `openai_responses.py`