Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-06-28 19:04:19 +00:00)
fix: remote-vllm event loop blocking unit test on Mac
fix: remote-vllm event loop blocking unit test on Mac

The remote-vllm `test_chat_completion_doesnt_block_event_loop` unit test was often failing for me on a Mac. I traced the failures back to the switch to the AsyncOpenAI client in the remote-vllm provider, and it looks like the async client needs more accurate HTTP responses from our mock server than the old client did. So, this fixes the unit test's mock server to send proper Content-Type and Content-Length headers, which makes the AsyncOpenAI client happier on Macs.

All the test_remote_vllm.py unit tests now pass consistently for me on a Mac, with no flaking in the event loop test:

`pytest -s -v tests/unit/providers/inference/test_remote_vllm.py`

Signed-off-by: Ben Browning <bbrownin@redhat.com>
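For background on why the headers matter: without a Content-Length header (or chunked transfer encoding), an HTTP/1.1 client can only find the end of a response body by waiting for the server to close the connection, and an async client like AsyncOpenAI can surface that as a stall or timeout. Below is a minimal, self-contained sketch of the corrected mock-handler pattern; `MockHandler`, `SLEEP_TIME`, and `RESPONSE` are illustrative stand-ins, not the actual names from the test file.

```python
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative stand-ins; the real test builds these from fixtures.
SLEEP_TIME = 0.5
RESPONSE = {"id": "chatcmpl-123", "choices": []}


class MockHandler(BaseHTTPRequestHandler):
    """Responds slowly but with well-formed headers, so async clients
    can delimit the body instead of waiting for the socket to close."""

    def do_POST(self):  # noqa: N802
        time.sleep(SLEEP_TIME)
        body = json.dumps(RESPONSE).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Serve on an ephemeral port in a background thread.
    server = HTTPServer(("127.0.0.1", 0), MockHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(f"mock server listening on port {server.server_port}")
    time.sleep(2)  # keep the process alive briefly for manual testing
```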
This commit is contained in:
parent c7be73fb16
commit f586bdd912

1 changed file with 4 additions and 1 deletion
tests/unit/providers/inference/test_remote_vllm.py
@@ -69,9 +69,12 @@ class MockInferenceAdapterWithSleep:
             # ruff: noqa: N802
             def do_POST(self):
                 time.sleep(sleep_time)
+                response_body = json.dumps(response).encode("utf-8")
                 self.send_response(code=200)
+                self.send_header("Content-Type", "application/json")
+                self.send_header("Content-Length", len(response_body))
                 self.end_headers()
-                self.wfile.write(json.dumps(response).encode("utf-8"))
+                self.wfile.write(response_body)
 
         self.request_handler = DelayedRequestHandler
 
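As a usage note, the event-loop test's general shape is: await the (properly non-blocking) request while a concurrent heartbeat task verifies the loop keeps turning. Here is a sketch of that shape, with a stand-in `slow_call` in place of the real adapter request; this is not the actual `test_chat_completion_doesnt_block_event_loop` code.

```python
import asyncio


async def slow_call() -> str:
    # Stand-in for the adapter's chat completion against the slow mock
    # server; asyncio.sleep models a properly non-blocking request.
    await asyncio.sleep(0.5)
    return "ok"


async def main() -> None:
    ticks = 0

    async def heartbeat() -> None:
        nonlocal ticks
        while True:
            await asyncio.sleep(0.05)
            ticks += 1

    hb = asyncio.create_task(heartbeat())
    result = await slow_call()
    hb.cancel()

    # If the event loop had been blocked during the call (e.g. by a
    # synchronous HTTP request on the loop thread), the heartbeat task
    # could never have run while we waited.
    assert result == "ok" and ticks > 0, "event loop was blocked"
    print(f"loop stayed responsive: {ticks} heartbeats during the call")


if __name__ == "__main__":
    asyncio.run(main())
```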