llama-stack-mirror/tests/unit/distribution
Matthew Farrellee 01bdcce4d2
chore(recorder): update mocks to be closer to non-mock environment (#3442)
# What does this PR do?

the @required_args decorator in openai-python masks the async
nature of the {AsyncCompletions,chat.AsyncCompletions}.create methods.
see https://github.com/openai/openai-python/issues/996

this means two things:

 0. we cannot use iscoroutine in the recorder to detect async vs non-async calls
 1. our mocks inappropriately introduce identifiable async behavior

for (0), we update the iscoroutine check w/ detection of /v1/models,
which is the only non-async function we mock & record.
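
to illustrate (0), a minimal sketch of the endpoint-based check; the names `_is_async_endpoint` and `record_call` are hypothetical, not the recorder's actual code:

```python
# A sketch only: since @required_args hides whether create() is a coroutine
# function, branch on the endpoint path instead of inspect.iscoroutinefunction().
def _is_async_endpoint(endpoint: str) -> bool:
    # /v1/models is the only non-async call we mock & record;
    # every other recorded endpoint is async.
    return endpoint != "/v1/models"


async def record_call(original, endpoint: str, *args, **kwargs):
    result = original(*args, **kwargs)
    if _is_async_endpoint(endpoint):
        # the real async create() returns a coroutine even though the
        # @required_args wrapper makes the function look synchronous
        result = await result
    return result
```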

for (1), we could leave everything as is and assume (0) will catch
errors. to be defensive, we update the unit tests to mock below the create
methods, allowing the true openai-python create() methods to be exercised.
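
to illustrate (1), a hedged sketch of mocking below create(); the patch point (AsyncOpenAI's base post method) and the placeholder payload are assumptions rather than the repo's actual test code, but they show how the genuine async create() can run while only the layer beneath it is faked:

```python
from unittest.mock import AsyncMock, patch

from openai import AsyncOpenAI


async def demo_mock_below_create():
    # Placeholder payload; with the layer below create() mocked, the value
    # passes through create() unchanged.
    fake = {"id": "chatcmpl-test", "choices": []}
    # Patch below create(): the real openai-python create(), including its
    # @required_args wrapper, still executes and awaits the mocked post().
    with patch.object(AsyncOpenAI, "post", new=AsyncMock(return_value=fake)):
        client = AsyncOpenAI(api_key="test")
        result = await client.chat.completions.create(
            model="test-model",
            messages=[{"role": "user", "content": "hi"}],
        )
    assert result == fake
```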
2025-09-15 15:25:53 -04:00
| Name | Latest commit | Date |
|------|---------------|------|
| routers | feat!: Migrate Vector DB IDs to Vector Store IDs (breaking change) (#3253) | 2025-09-05 15:40:34 +02:00 |
| routing_tables | feat!: Migrate Vector DB IDs to Vector Store IDs (breaking change) (#3253) | 2025-09-05 15:40:34 +02:00 |
| test_build_path.py | chore: rename templates to distributions (#3035) | 2025-08-04 11:34:17 -07:00 |
| test_context.py | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| test_distribution.py | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| test_inference_recordings.py | chore(recorder): update mocks to be closer to non-mock environment (#3442) | 2025-09-15 15:25:53 -04:00 |
| test_library_client_initialization.py | feat: Remove initialize() Method from LlamaStackAsLibrary (#2979) | 2025-08-21 15:59:04 -07:00 |