llama-stack-mirror/tests/integration/responses
Ashwin Bharambe cb40da210f
fix: update tests for OpenAI-style models endpoint (#4053)
The llama-stack-client now uses `/v1/openai/v1/models`, which returns
OpenAI-compatible model objects with 'id' and 'custom_metadata' fields
instead of the Resource-style 'identifier' field. Updated api_recorder
to handle the new endpoint and modified tests to access model metadata
appropriately. Deleted stale model recordings so they can be re-recorded.
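
For context, a minimal sketch (not the repository's actual test code) of what the new response shape means for callers. Only the endpoint path and the 'id'/'custom_metadata' fields come from the description above; the base URL and the `data` envelope are assumptions for illustration.

```python
import requests

BASE_URL = "http://localhost:8321"  # hypothetical local Llama Stack server

resp = requests.get(f"{BASE_URL}/v1/openai/v1/models")
resp.raise_for_status()
body = resp.json()

# OpenAI-compatible list endpoints typically wrap entries in a "data" array.
for model in body.get("data", []):
    # New shape: 'id' plus 'custom_metadata', instead of the Resource-style
    # 'identifier' field the tests previously read.
    print(model["id"], model.get("custom_metadata"))
```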

**NOTE: CI will be red on this one since it is dependent on
https://github.com/llamastack/llama-stack-client-python/pull/291/files
landing. I verified locally that it is green.**
2025-11-03 17:30:08 -08:00
| Name | Last commit | Last updated |
|------|-------------|--------------|
| `fixtures` | feat(responses)!: improve responses + conversations implementations (#3810) | 2025-10-15 09:36:11 -07:00 |
| `recordings` | fix: update tests for OpenAI-style models endpoint (#4053) | 2025-11-03 17:30:08 -08:00 |
| `__init__.py` | feat(tests): introduce a test "suite" concept to encompass dirs, options (#3339) | 2025-09-05 13:58:49 -07:00 |
| `helpers.py` | feat(responses)!: improve responses + conversations implementations (#3810) | 2025-10-15 09:36:11 -07:00 |
| `streaming_assertions.py` | feat(responses)!: add in_progress, failed, content part events (#3765) | 2025-10-10 07:27:34 -07:00 |
| `test_basic_responses.py` | fix(responses): fixes, re-record tests (#3820) | 2025-10-15 16:37:42 -07:00 |
| `test_conversation_responses.py` | fix(responses): use conversation items when no stored messages exist (#3819) | 2025-10-15 14:43:44 -07:00 |
| `test_file_search.py` | feat(responses)!: add reasoning and annotation added events (#3793) | 2025-10-11 16:47:14 -07:00 |
| `test_tool_responses.py` | fix(ci): enable responses tests in CI; suppress expected MCP auth error logs (#3889) | 2025-10-22 14:59:42 -07:00 |