llama-stack-mirror/llama_stack/providers/inline/agents/meta_reference/responses
Ashwin Bharambe d47f2c0ba8 feat(responses)!: improve responses + conversations implementations
This PR updates the Conversation item-related types and improves a
couple of critical parts of the implementation:

- it creates a streaming output item for the final assistant message produced by
  the model; until now we only added content parts and included that
  message in the final response (see the first sketch below).

- it rewrites the conversation update code completely to account for items
  other than messages (tool calls, tool outputs, etc.; see the second sketch below).
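
The first change means the final assistant message now gets its own output-item lifecycle around the streamed content, rather than appearing only in the final response object. Below is a minimal, hypothetical sketch of that flow; the event names follow OpenAI Responses streaming conventions (`response.output_item.added` / `.done`), and the helper and type names are illustrative, not the actual llama-stack implementation.

```python
# Hypothetical sketch only: wrap the streamed assistant text in an
# output_item.added / output_item.done pair, instead of emitting content-part
# events alone and attaching the message only to the final response.
from dataclasses import dataclass, field
from typing import AsyncIterator, Awaitable, Callable


@dataclass
class OutputMessageItem:
    # Illustrative stand-in for the real output message item type.
    id: str
    role: str = "assistant"
    status: str = "in_progress"
    content: list = field(default_factory=list)


async def stream_final_message(
    text_chunks: AsyncIterator[str],
    emit: Callable[[dict], Awaitable[None]],
    item_id: str,
) -> OutputMessageItem:
    item = OutputMessageItem(id=item_id)
    # New behavior: announce the output item before streaming its content.
    await emit({"type": "response.output_item.added", "item": item})

    text = ""
    async for chunk in text_chunks:
        text += chunk
        await emit(
            {"type": "response.output_text.delta", "item_id": item_id, "delta": chunk}
        )

    item.content.append({"type": "output_text", "text": text})
    item.status = "completed"
    # New behavior: close out the item so clients see the finished message as a
    # streamed output item, not only inside the final response object.
    await emit({"type": "response.output_item.done", "item": item})
    return item
```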
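The second change concerns syncing a finished response back into its conversation. A rough sketch, assuming illustrative item type strings: instead of copying only assistant messages, every output item (including tool calls and their outputs) is mapped into a conversation item.

```python
# Hypothetical sketch only: map response output items to conversation items,
# keeping tool calls and their outputs rather than just assistant messages.
# The item type strings here are assumptions for illustration.
def conversation_items_from_output(output_items: list[dict]) -> list[dict]:
    conversation_items: list[dict] = []
    for item in output_items:
        item_type = item.get("type")
        if item_type == "message":
            conversation_items.append(
                {"type": "message", "role": item["role"], "content": item["content"]}
            )
        elif item_type in ("function_call", "function_call_output", "mcp_call"):
            # Previously these were not recorded; keeping tool activity lets
            # later turns see what was called and what came back.
            conversation_items.append(dict(item))
        else:
            # Unknown item kinds are passed through unchanged rather than lost.
            conversation_items.append(dict(item))
    return conversation_items
```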
2025-10-14 14:42:12 -07:00
__init__.py chore(responses): Refactor Responses Impl to be civilized (#3138) 2025-08-15 00:05:35 +00:00
openai_responses.py feat(responses)!: improve responses + conversations implementations 2025-10-14 14:42:12 -07:00
streaming.py feat(responses)!: improve responses + conversations implementations 2025-10-14 14:42:12 -07:00
tool_executor.py feat(responses)!: add reasoning and annotation added events (#3793) 2025-10-11 16:47:14 -07:00
types.py feat: reuse previous mcp tool listings where possible (#3710) 2025-10-10 09:28:25 -07:00
utils.py feat(responses)!: add in_progress, failed, content part events (#3765) 2025-10-10 07:27:34 -07:00