llama-stack-mirror/tests/integration/responses
Ashwin Bharambe a7df687167 feat(tests): enable MCP tests in server mode
We would like to run all OpenAI compatibility tests using only the openai-client library. This is the friendliest setup for contributors, since they can run tests without needing to update the client SDKs (which is getting easier, but is still a long pole).

This is the first step toward enabling that: not using the "library client" for any of the Responses tests. This seems like a reasonable trade-off, since using an embeddable library client for Responses (or any other OpenAI-compatible) behavior appears to be uncommon. To do this, we needed to enable the MCP tests (which previously worked only in library-client mode) in server mode.
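As a rough illustration of what the server-mode, openai-client-only setup looks like, here is a minimal sketch. It assumes a Llama Stack server already listening on a local port and a locally running MCP test server; the base URL path, port, model id, MCP server URL, and labels are placeholders for illustration, not the repo's actual fixtures.

```python
# Minimal sketch: Responses tests driven purely by the `openai` package
# against a running Llama Stack server (no embedded "library client").
from openai import OpenAI

# Llama Stack exposes an OpenAI-compatible API over HTTP; the port and the
# exact base-URL path here are assumptions, not the repo's test config.
client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

# A plain Responses call, written exactly as it would be against OpenAI.
response = client.responses.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model id
    input="Say hello in one word.",
)
print(response.output_text)

# An MCP tool call. In server mode the MCP server URL must be reachable
# from the stack server process, not merely from the test process, which
# is the constraint the MCP-test enablement in this change addresses.
response = client.responses.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model id
    input="What tools are available?",
    tools=[
        {
            "type": "mcp",
            "server_label": "localmcp",                 # hypothetical label
            "server_url": "http://localhost:8000/sse",  # hypothetical URL
        }
    ],
)
```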
2025-11-12 20:20:38 -08:00
fixtures                       | fix(tests): add OpenAI client connection cleanup to prevent CI hangs (#4119)     | 2025-11-12 12:17:13 -05:00
recordings                     | feat(tests): enable MCP tests in server mode                                      | 2025-11-12 20:20:38 -08:00
__init__.py                    | feat(tests): introduce a test "suite" concept to encompass dirs, options (#3339)  | 2025-09-05 13:58:49 -07:00
conftest.py                    | feat(tests): enable MCP tests in server mode                                      | 2025-11-12 20:20:38 -08:00
helpers.py                     | feat(responses)!: improve responses + conversations implementations (#3810)       | 2025-10-15 09:36:11 -07:00
streaming_assertions.py        | feat(responses)!: add in_progress, failed, content part events (#3765)            | 2025-10-10 07:27:34 -07:00
test_basic_responses.py        | feat(tests): enable MCP tests in server mode                                      | 2025-11-12 20:20:38 -08:00
test_conversation_responses.py | feat(tests): enable MCP tests in server mode                                      | 2025-11-12 20:20:38 -08:00
test_file_search.py            | feat(tests): enable MCP tests in server mode                                      | 2025-11-12 20:20:38 -08:00
test_tool_responses.py         | feat(tests): enable MCP tests in server mode                                      | 2025-11-12 20:20:38 -08:00