llama-stack-mirror/llama_stack/providers/inline/agents/meta_reference
ehhuang cf422da825 fix: responses <> chat completion input conversion (#3645)
# What does this PR do?

closes #3268
closes #3498

When resuming from a previous response ID, we currently attempt to convert
the stored responses input back into chat completion messages. This
conversion is not always lossless: for tool calls, for example, some data
is lost when a chat completion message is converted to the responses input
format.

This PR instead stores the chat completion messages that correspond to the
_last_ call to chat completion. These are sufficient to resume from: the
next responses API call loads the saved messages and skips the conversion
entirely.

Separate issue to optimize storage:
https://github.com/llamastack/llama-stack/issues/3646

## Test Plan
existing CI tests
2025-10-02 21:50:13 -07:00
responses fix: responses <> chat completion input conversion (#3645) 2025-10-02 21:50:13 -07:00
__init__.py chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
agent_instance.py feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) 2025-10-02 21:50:13 -07:00
agents.py refactor(logging): rename llama_stack logger categories (#3065) 2025-08-21 17:31:04 -07:00
config.py feat: add list responses API (#2233) 2025-05-23 13:16:48 -07:00
persistence.py refactor(logging): rename llama_stack logger categories (#3065) 2025-08-21 17:31:04 -07:00
safety.py refactor(logging): rename llama_stack logger categories (#3065) 2025-08-21 17:31:04 -07:00