llama_stack/providers/inline/agents/meta_reference
ashwinb 47d5af703c
chore(responses): Refactor Responses Impl to be civilized (#3138)
# What does this PR do?
Refactors the OpenAI responses implementation by extracting streaming and tool execution logic into separate modules. This improves code organization by:

1. Creating a new `StreamingResponseOrchestrator` class in `streaming.py` to handle the streaming response generation logic
2. Moving tool execution functionality to a dedicated `ToolExecutor` class in `tool_executor.py`
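The split above can be sketched roughly as follows. This is an illustrative outline only: the class names `StreamingResponseOrchestrator` and `ToolExecutor` come from the PR, but the method names, signatures, and internals here are assumptions, not the actual implementation.

```python
# Hypothetical sketch of the post-refactor module boundaries.
# Class names match the PR description; everything else is illustrative.


class ToolExecutor:
    """Would live in tool_executor.py: runs a single tool call."""

    def __init__(self, tools):
        # tools: mapping of tool name -> callable (assumed shape)
        self.tools = tools

    def execute(self, name, arguments):
        if name not in self.tools:
            raise KeyError(f"unknown tool: {name}")
        return self.tools[name](**arguments)


class StreamingResponseOrchestrator:
    """Would live in streaming.py: drives streaming response generation,
    delegating tool calls to the executor instead of inlining them."""

    def __init__(self, tool_executor):
        self.tool_executor = tool_executor

    def stream(self, chunks):
        # Pass model output chunks through; when a chunk is a tool call,
        # execute it and yield the tool's result in its place.
        for chunk in chunks:
            if isinstance(chunk, dict) and chunk.get("type") == "tool_call":
                yield self.tool_executor.execute(chunk["name"], chunk["arguments"])
            else:
                yield chunk
```

The point of the shape is that the orchestrator owns the streaming loop while tool invocation lives behind a single seam, which is what makes the two concerns independently testable.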

## Test Plan

Existing tests
2025-08-15 00:05:35 +00:00
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| responses | chore(responses): Refactor Responses Impl to be civilized (#3138) | 2025-08-15 00:05:35 +00:00 |
| __init__.py | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| agent_instance.py | chore: standardize session not found error (#3031) | 2025-08-04 13:12:02 -07:00 |
| agents.py | chore(responses): Refactor Responses Impl to be civilized (#3138) | 2025-08-15 00:05:35 +00:00 |
| config.py | feat: add list responses API (#2233) | 2025-05-23 13:16:48 -07:00 |
| persistence.py | fix: Fix list_sessions() (#3114) | 2025-08-13 07:46:26 -07:00 |
| safety.py | chore(api): add mypy coverage to meta_reference_safety (#2661) | 2025-07-09 10:22:34 +02:00 |