llama-stack-mirror/tests/integration/responses
Charlie Doern 72ee2b3548 fix(tests): add OpenAI client connection cleanup to prevent CI hangs
Add explicit connection cleanup and shorter timeouts to the OpenAI client
fixtures. This fixes a CI deadlock that occurred after 25+ tests due to
connection-pool exhaustion. Also adds a 60s timeout to
test_conversation_context_loading as a safety net.

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-11-10 20:16:28 -05:00
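The cleanup pattern described in the commit can be sketched as a generator-style pytest fixture: create the client with an explicit short timeout, yield it to the test, and close it in the teardown half so pooled connections are released deterministically instead of waiting on garbage collection. The sketch below uses a stand-in client class so it is self-contained; in the actual fixtures the object would be an `openai.OpenAI` instance, whose `close()` releases the underlying HTTP connection pool. Names like `FakeClient` and the 30s value are illustrative assumptions, not the repository's real code.

```python
class FakeClient:
    """Stand-in for openai.OpenAI; close() mimics releasing the pool."""

    def __init__(self, timeout: float):
        self.timeout = timeout
        self.closed = False

    def close(self) -> None:
        # In the real client this would close the httpx connection pool.
        self.closed = True


def openai_client_fixture():
    # Body of a @pytest.fixture: setup before yield, teardown after.
    # A shorter-than-default timeout makes a stuck request fail fast
    # rather than hanging the whole CI run.
    client = FakeClient(timeout=30.0)
    yield client
    # Explicit teardown: never rely on GC to free connections between tests.
    client.close()


# Drive the fixture the way pytest would: setup, test body, teardown.
gen = openai_client_fixture()
client = next(gen)          # setup phase: client handed to the test
assert client.timeout == 30.0
try:
    next(gen)               # teardown phase runs after the test returns
except StopIteration:
    pass
```

With the real SDK the same shape applies: `OpenAI(..., timeout=...)` in the setup half, `client.close()` after the `yield`. The per-test 60s safety net mentioned above would typically be layered on separately (for example via the pytest-timeout plugin) rather than inside the fixture.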
fixtures fix(tests): add OpenAI client connection cleanup to prevent CI hangs 2025-11-10 20:16:28 -05:00
recordings fix(ci): add recordings for responses suite due to web search type changing (#4104) 2025-11-07 10:42:07 -08:00
__init__.py feat(tests): introduce a test "suite" concept to encompass dirs, options (#3339) 2025-09-05 13:58:49 -07:00
helpers.py feat(responses)!: improve responses + conversations implementations (#3810) 2025-10-15 09:36:11 -07:00
streaming_assertions.py feat(responses)!: add in_progress, failed, content part events (#3765) 2025-10-10 07:27:34 -07:00
test_basic_responses.py fix(responses): fixes, re-record tests (#3820) 2025-10-15 16:37:42 -07:00
test_conversation_responses.py fix(tests): add OpenAI client connection cleanup to prevent CI hangs 2025-11-10 20:16:28 -05:00
test_file_search.py chore: Stack server no longer depends on llama-stack-client (#4094) 2025-11-07 09:54:09 -08:00
test_tool_responses.py chore: Stack server no longer depends on llama-stack-client (#4094) 2025-11-07 09:54:09 -08:00