llama-stack-mirror/tests/integration/responses
Sébastien Han d82a2cd6f8
fix: httpcore deadlock in CI by properly closing streaming responses (#4335)
# What does this PR do?

The test_conversation_error_handling test was timing out in CI with a
deadlock in httpcore's connection pool. The root cause was the preceding
test_conversation_multi_turn_and_streaming test, which broke out of the
streaming response iterator early without properly closing the
underlying HTTP connection.

When a streaming response iterator is abandoned mid-stream, the HTTP
connection remains in an incomplete state. Since the openai_client
fixture is session-scoped, subsequent tests reuse the same httpcore
connection pool. The dangling connection causes the pool's internal lock
to deadlock when the next test attempts to acquire a new connection.

The fix wraps the streaming response in a context manager, which ensures
the connection is properly closed when exiting the with block, even when
breaking out of the loop early. This is a best practice when working
with streaming HTTP responses that may not be fully consumed.
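The close-on-early-break behavior can be sketched generically. The class below is a hypothetical stand-in for a streaming HTTP response (names and structure are illustrative, not the actual client code); it shows why iterating bare leaks the connection on an early `break`, while the context-manager form releases it:

```python
class FakeStream:
    """Hypothetical stand-in for a streaming HTTP response."""

    def __init__(self, chunks):
        self.chunks = chunks
        self.closed = False

    def __iter__(self):
        # Iterating the object directly gives no hook to close it.
        return iter(self.chunks)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Runs on normal exit, exception, or early break out of the loop.
        self.close()
        return False

    def close(self):
        self.closed = True


# Bare iteration: breaking early abandons the stream mid-read.
leaky = FakeStream(["a", "b", "c"])
for chunk in leaky:
    break
assert not leaky.closed  # connection left dangling in the pool

# Context manager: the early break still triggers __exit__ and close().
safe = FakeStream(["a", "b", "c"])
with safe as stream:
    for chunk in stream:
        break
assert safe.closed
```

The same `with` pattern applies to real streaming responses (httpx and the OpenAI Python client both support it), which is why wrapping the stream is the fix here.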

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-12-08 16:38:46 +01:00
fixtures fix(tests): add OpenAI client connection cleanup to prevent CI hangs (#4119) 2025-11-12 12:17:13 -05:00
recordings fix: Fix max_tool_calls for openai provider and add integration tests for the max_tool_calls feat (#4190) 2025-11-19 10:27:56 -08:00
__init__.py feat(tests): introduce a test "suite" concept to encompass dirs, options (#3339) 2025-09-05 13:58:49 -07:00
conftest.py feat(tests): enable MCP tests in server mode (#4146) 2025-11-13 07:23:23 -08:00
helpers.py feat(responses)!: improve responses + conversations implementations (#3810) 2025-10-15 09:36:11 -07:00
streaming_assertions.py feat(responses)!: add in_progress, failed, content part events (#3765) 2025-10-10 07:27:34 -07:00
test_basic_responses.py feat(tests): enable MCP tests in server mode (#4146) 2025-11-13 07:23:23 -08:00
test_conversation_responses.py fix: httpcore deadlock in CI by properly closing streaming responses (#4335) 2025-12-08 16:38:46 +01:00
test_file_search.py feat(tests): enable MCP tests in server mode (#4146) 2025-11-13 07:23:23 -08:00
test_mcp_authentication.py fix: MCP authorization parameter implementation (#4052) 2025-11-14 08:54:42 -08:00
test_tool_responses.py fix: Fix max_tool_calls for openai provider and add integration tests for the max_tool_calls feat (#4190) 2025-11-19 10:27:56 -08:00