llama-stack-mirror/llama_stack/testing
Matthew Farrellee ee79812da2 chore(recorder): update mocks to be closer to non-mock environment
the @required_args decorator in openai-python masks the async nature
of the {AsyncCompletions,chat.AsyncCompletions}.create methods.
see https://github.com/openai/openai-python/issues/996
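a minimal illustration of the masking, using a simplified stand-in for openai-python's decorator (not the real implementation):

```python
import functools
import inspect

def required_args(*variants):
    # simplified stand-in for openai-python's @required_args: the wrapper is
    # a plain `def`, so the async nature of the wrapped method is hidden
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)  # returns a coroutine when func is async
        return wrapper
    return decorator

@required_args(["model", "messages"])
async def create(**kwargs):
    ...

print(inspect.iscoroutinefunction(create))  # False, despite the async def underneath
```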

this means two things -
 0. we cannot use an iscoroutine check in the recorder to detect async vs non-async methods
 1. our mocks inappropriately introduce identifiable async behavior that the real create methods do not

for (0), we update the iscoroutine check with detection of /v1/models,
which is the only non-async function we mock & record.
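a minimal sketch of what such endpoint-based detection can look like (illustrative only, not the actual inference_recorder.py code):

```python
def _is_async_endpoint(endpoint: str) -> bool:
    # /v1/models is the only non-async method we mock & record; treat every
    # other recorded endpoint as async
    return endpoint != "/v1/models"

def _make_patched_method(endpoint: str, recorded_response):
    # hypothetical wrapper factory: return an async or sync replacement based
    # on the endpoint, since iscoroutinefunction() on the decorated original
    # can no longer be trusted
    if _is_async_endpoint(endpoint):
        async def patched(*args, **kwargs):
            return recorded_response
    else:
        def patched(*args, **kwargs):
            return recorded_response
    return patched
```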

for (1), we could leave everything as is and assume the fix for (0) will catch errors.
to be defensive, we update the unit tests to mock at a layer below the create methods,
allowing the true openai-python create() methods to be exercised.
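as an example of mocking below create(), a test can stub the HTTP transport instead of the method itself; the sketch below is illustrative (it uses httpx.MockTransport rather than the exact patching in the unit tests) and lets the real, @required_args-wrapped create() run end to end:

```python
import asyncio

import httpx
from openai import AsyncOpenAI

def _handler(request: httpx.Request) -> httpx.Response:
    # canned /v1/chat/completions payload; field values are illustrative
    return httpx.Response(
        200,
        json={
            "id": "chatcmpl-test",
            "object": "chat.completion",
            "created": 0,
            "model": "test-model",
            "choices": [
                {
                    "index": 0,
                    "finish_reason": "stop",
                    "message": {"role": "assistant", "content": "hi"},
                }
            ],
        },
    )

client = AsyncOpenAI(
    api_key="test",
    http_client=httpx.AsyncClient(transport=httpx.MockTransport(_handler)),
)

async def _demo() -> None:
    # the true openai-python create() (including @required_args) executes;
    # only the network layer underneath is faked
    resp = await client.chat.completions.create(
        model="test-model", messages=[{"role": "user", "content": "ping"}]
    )
    assert resp.choices[0].message.content == "hi"

asyncio.run(_demo())
```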
2025-09-14 07:04:44 -04:00
__init__.py feat(tests): introduce inference record/replay to increase test reliability (#2941) 2025-07-29 12:41:31 -07:00
inference_recorder.py chore(recorder): update mocks to be closer to non-mock environment 2025-09-14 07:04:44 -04:00