llama-stack-mirror/tests/unit/providers
Ben Browning f586bdd912 fix: remote-vllm event loop blocking unit test on Mac
The remote-vllm `test_chat_completion_doesnt_block_event_loop` unit
test was often failing for me on a Mac. I traced the failures back to
the switch to the AsyncOpenAI client in the remote-vllm provider, and
it looks like the async client needs a bit more accurate HTTP handling
from our mock server.

So, this fixes the mock server in that unit test to send proper
Content-Type and Content-Length headers, which makes the AsyncOpenAI
client happier on Macs.
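
For illustration, here is a minimal sketch of what a well-formed mock
response looks like. This is not the exact code in
test_remote_vllm.py; the handler class and payload are hypothetical,
assuming an `http.server`-based mock:

```python
# Hypothetical mock vLLM handler; names and payload are illustrative only.
import http.server
import json

class MockVllmHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        # Drain the request body so the connection stays in a clean state
        # for the client's HTTP machinery.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)

        body = json.dumps(
            {"choices": [{"message": {"role": "assistant", "content": "ok"}}]}
        ).encode()
        self.send_response(200)
        # The AsyncOpenAI client (httpx underneath) is stricter about
        # well-formed responses, so declare the type and length explicitly.
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serve on an ephemeral port for manual poking.
    with http.server.ThreadingHTTPServer(("127.0.0.1", 0), MockVllmHandler) as srv:
        print(f"mock listening on port {srv.server_address[1]}")
        srv.serve_forever()
```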

All of the test_remote_vllm.py unit tests now consistently pass for me
on a Mac, with no flaking in the event loop test.

`pytest -s -v tests/unit/providers/inference/test_remote_vllm.py`

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-06-02 08:36:35 -04:00
agent feat: add list responses API (#2233) 2025-05-23 13:16:48 -07:00
agents fix(responses): use input, not original_input when storing the Response (#2300) 2025-05-28 13:17:48 -07:00
inference fix: remote-vllm event loop blocking unit test on Mac 2025-06-02 08:36:35 -04:00
nvidia fix: Pass model parameter as config name to NeMo Customizer (#2218) 2025-05-20 09:51:39 -07:00
utils fix: add check for interleavedContent (#1973) 2025-05-06 09:55:07 -07:00
vector_io feat: Enable ingestion of precomputed embeddings (#2317) 2025-05-31 04:03:37 -06:00
test_configs.py feat(api): don't return a payload on file delete (#1640) 2025-03-25 17:12:36 -07:00