llama-stack-mirror/tests/integration/responses
Omar Abdelwahab e13014be23 test: Add skip marker for MCP auth tests in replay mode
Analysis of CI server logs revealed that tests passing the authorization parameter
produce different OpenAI request hashes from the existing MCP tool tests, so they
require separate recordings.

The server log showed:
- RuntimeError: Recording not found for request hash: 56ddb450d...
- Tests that pass authorization need their own recordings for replay mode
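For illustration, a minimal sketch of why an extra parameter breaks replay, assuming the recording layer hashes a canonical encoding of the request body (the actual hashing scheme, field names, and tool shape here are assumptions, not the real implementation):

```python
import hashlib
import json

def request_hash(body: dict) -> str:
    # Hash a canonical JSON encoding of the request body.
    # Illustrative only; the real recording layer may hash differently.
    canonical = json.dumps(body, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical request bodies; field names are assumptions for illustration.
base = {"model": "gpt-4o", "tools": [{"type": "mcp", "server_url": "https://example.com/mcp"}]}
with_auth = {**base, "authorization": "Bearer <token>"}

# The two requests hash differently, so a recording captured without
# `authorization` cannot be found when replaying a request that includes it.
assert request_hash(base) != request_hash(with_auth)
```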

Since the recordings cannot be generated locally (dev server network constraints)
and require CI infrastructure with OpenAI API access, a skip marker is added
until recordings can be generated in CI record mode.
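A minimal sketch of such a skip marker, assuming the suite exposes the inference mode through a LLAMA_STACK_TEST_INFERENCE_MODE environment variable (the variable name, default, and test name are assumptions for illustration):

```python
import os

import pytest

# Skip in replay mode until dedicated recordings for the auth tests exist.
# LLAMA_STACK_TEST_INFERENCE_MODE is an assumed environment variable name.
skip_in_replay_mode = pytest.mark.skipif(
    os.environ.get("LLAMA_STACK_TEST_INFERENCE_MODE", "replay") == "replay",
    reason="MCP auth tests need dedicated recordings; run in record mode with an OpenAI API key",
)

@skip_in_replay_mode
def test_mcp_tool_with_authorization():
    ...
```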

The tests pass when run with an actual OpenAI API key in record mode.
2025-11-13 19:52:27 -08:00
| File | Last commit | Date |
|---|---|---|
| fixtures | fix(tests): add OpenAI client connection cleanup to prevent CI hangs (#4119) | 2025-11-12 12:17:13 -05:00 |
| recordings | feat: split API and provider specs into separate llama-stack-api pkg (#3895) | 2025-11-13 11:51:17 -08:00 |
| __init__.py | feat(tests): introduce a test "suite" concept to encompass dirs, options (#3339) | 2025-09-05 13:58:49 -07:00 |
| conftest.py | feat(tests): enable MCP tests in server mode (#4146) | 2025-11-13 07:23:23 -08:00 |
| helpers.py | feat(responses)!: improve responses + conversations implementations (#3810) | 2025-10-15 09:36:11 -07:00 |
| streaming_assertions.py | feat(responses)!: add in_progress, failed, content part events (#3765) | 2025-10-10 07:27:34 -07:00 |
| test_basic_responses.py | feat(tests): enable MCP tests in server mode (#4146) | 2025-11-13 07:23:23 -08:00 |
| test_conversation_responses.py | test: Add timeout to test_conversation_error_handling to prevent CI hang | 2025-11-13 18:46:27 -08:00 |
| test_file_search.py | feat(tests): enable MCP tests in server mode (#4146) | 2025-11-13 07:23:23 -08:00 |
| test_mcp_authentication.py | test: Add skip marker for MCP auth tests in replay mode | 2025-11-13 19:52:27 -08:00 |
| test_tool_responses.py | Merge branch 'main' into add-mcp-authentication-param | 2025-11-13 09:42:35 -08:00 |