llama-stack-mirror/tests/unit/providers

Latest commit: 47d5af703c by ashwinb, 2025-08-15 00:05:35 +00:00
chore(responses): Refactor Responses Impl to be civilized (#3138)
# What does this PR do?
Refactors the OpenAI responses implementation by extracting streaming and tool execution logic into separate modules. This improves code organization by:

1. Creating a new `StreamingResponseOrchestrator` class in `streaming.py` to handle the streaming response generation logic
2. Moving tool execution functionality to a dedicated `ToolExecutor` class in `tool_executor.py` (a rough sketch of this split follows below)
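
To make the intended separation concrete, here is a minimal, illustrative sketch of how an orchestrator/executor split like this can look. Only the two class names above come from this PR; every other name, signature, and type here is hypothetical and not taken from the actual diff.

```python
# Illustrative sketch only -- not the llama-stack implementation.
# Only the two class names come from the PR description; everything else is hypothetical.
from __future__ import annotations

import asyncio
from collections.abc import AsyncIterator, Awaitable, Callable
from dataclasses import dataclass


@dataclass
class ToolCall:
    """Hypothetical stand-in for a model-emitted tool invocation."""
    name: str
    arguments: dict


class ToolExecutor:
    """Owns tool lookup and invocation, independent of any streaming concerns."""

    def __init__(self, tools: dict[str, Callable[..., Awaitable[str]]]) -> None:
        self._tools = tools

    async def execute(self, call: ToolCall) -> str:
        handler = self._tools.get(call.name)
        if handler is None:
            return f"error: unknown tool {call.name!r}"
        return await handler(**call.arguments)


class StreamingResponseOrchestrator:
    """Drives streaming response generation, delegating tool calls to the executor."""

    def __init__(self, executor: ToolExecutor) -> None:
        self._executor = executor

    async def stream(self, chunks: AsyncIterator[str | ToolCall]) -> AsyncIterator[str]:
        async for chunk in chunks:
            if isinstance(chunk, ToolCall):
                # Tool execution is fully delegated; the orchestrator only
                # re-emits the result as part of the outgoing stream.
                yield await self._executor.execute(chunk)
            else:
                yield chunk


async def _demo() -> None:
    async def _echo(text: str) -> str:
        return text

    orchestrator = StreamingResponseOrchestrator(ToolExecutor({"echo": _echo}))

    async def source() -> AsyncIterator[str | ToolCall]:
        yield "Looking that up... "
        yield ToolCall(name="echo", arguments={"text": "42"})

    async for piece in orchestrator.stream(source()):
        print(piece, end="")


if __name__ == "__main__":
    asyncio.run(_demo())
```

Keeping tool invocation behind `ToolExecutor` leaves `StreamingResponseOrchestrator` responsible only for event ordering and stream assembly, which is the code-organization benefit the PR description is aiming at.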

## Test Plan

Existing tests
| Name | Last commit | Date |
|------|-------------|------|
| agent | fix: Fix list_sessions() (#3114) | 2025-08-13 07:46:26 -07:00 |
| agents | chore(responses): Refactor Responses Impl to be civilized (#3138) | 2025-08-15 00:05:35 +00:00 |
| inference | feat: Add clear error message when API key is missing (#2992) | 2025-07-31 16:33:16 -04:00 |
| nvidia | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| utils | fix: use ChatCompletionMessageFunctionToolCall (#3142) | 2025-08-14 10:27:00 -07:00 |
| vector_io | feat: Implement hybrid search in Milvus (#2644) | 2025-08-07 09:42:03 +02:00 |
| test_configs.py | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |