llama-stack-mirror/tests
ashwinb · 47d5af703c · 2025-08-15 00:05:35 +00:00
chore(responses): Refactor Responses Impl to be civilized (#3138)
# What does this PR do?
Refactors the OpenAI responses implementation by extracting streaming and tool execution logic into separate modules. This improves code organization by:

1. Creating a new `StreamingResponseOrchestrator` class in `streaming.py` to handle the streaming response generation logic
2. Moving tool execution functionality to a dedicated `ToolExecutor` class in `tool_executor.py` (a rough sketch of this separation follows below)

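The PR body only names the two classes and their modules; the following is a minimal, hypothetical sketch of how such a split might look. Everything beyond `StreamingResponseOrchestrator`, `ToolExecutor`, `streaming.py`, and `tool_executor.py` (the `ToolCall` type, the `tools` mapping, the `stream`/`execute` signatures) is illustrative, not the actual implementation.

```python
# Hypothetical sketch of the module split described above; only the two class
# names and module names come from the PR body, everything else is illustrative.
from dataclasses import dataclass
from typing import AsyncIterator


@dataclass
class ToolCall:
    """Illustrative stand-in for a tool invocation emitted by the model."""
    name: str
    arguments: dict


class ToolExecutor:
    """tool_executor.py: runs tool calls independently of streaming concerns."""

    def __init__(self, tools: dict):
        # Map of tool name -> callable; a real implementation would resolve
        # these through the tool runtime / registry.
        self.tools = tools

    async def execute(self, call: ToolCall) -> str:
        fn = self.tools[call.name]
        return fn(**call.arguments)


class StreamingResponseOrchestrator:
    """streaming.py: drives response generation, delegating tools to ToolExecutor."""

    def __init__(self, executor: ToolExecutor):
        self.executor = executor

    async def stream(self, chunks: AsyncIterator[object]) -> AsyncIterator[str]:
        async for chunk in chunks:
            if isinstance(chunk, ToolCall):
                # Tool execution no longer lives inline in the streaming loop.
                yield await self.executor.execute(chunk)
            else:
                yield str(chunk)
```

With a split along these lines, the streaming loop stays focused on event ordering while tool dispatch can be tested in isolation.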
## Test Plan

Existing tests
| Name | Latest commit | Date |
|------|---------------|------|
| common | chore(tests): fix responses and vector_io tests (#3119) | 2025-08-12 16:15:53 -07:00 |
| containers | feat(ci): add support for running vision inference tests (#2972) | 2025-07-31 11:50:42 -07:00 |
| external | chore: bump min python version in docs and tests (#3103) | 2025-08-12 08:52:57 -07:00 |
| integration | Revert "feat: add batches API with OpenAI compatibility" (#3149) | 2025-08-14 10:08:54 -07:00 |
| unit | chore(responses): Refactor Responses Impl to be civilized (#3138) | 2025-08-15 00:05:35 +00:00 |
| `__init__.py` | refactor(test): introduce --stack-config and simplify options (#1404) | 2025-03-05 17:02:02 -08:00 |
| `README.md` | docs: revamp testing documentation (#2155) | 2025-05-13 11:28:29 -07:00 |

# Llama Stack Tests

Llama Stack has multiple layers of testing to ensure continuous functionality and prevent regressions in the codebase.

| Testing Type | Details |
|--------------|---------|
| Unit | unit/README.md |
| Integration | integration/README.md |
| Verification | verifications/README.md |