llama-stack-mirror/tests/unit/providers/inference
test_remote_vllm.py
    Last commit: test: Bump slow_callback_duration to 200ms to avoid flaky test_chat_completion_doesnt_block_event_loop
    Date: 2025-03-17 20:43:39 -04:00
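
The commit message suggests the flaky test relies on asyncio's debug-mode slow-callback detection, where the event loop logs a warning whenever a single callback runs longer than loop.slow_callback_duration. Below is a minimal, hypothetical sketch of how such a test could be structured; it is not the repository's actual test. The fake_chat_completion coroutine, the test name, and the assertion logic are assumptions for illustration. Only loop.set_debug, loop.slow_callback_duration, and asyncio's "Executing ... took" warning are standard library behavior; raising the threshold to 200 ms is what the commit describes, presumably to tolerate jitter on slow CI machines.

    # Hypothetical sketch, not the actual test in test_remote_vllm.py.
    import asyncio
    import logging

    import pytest


    async def fake_chat_completion() -> str:
        # Stands in for a (streaming) inference call; it should yield control
        # back to the event loop promptly instead of blocking it.
        await asyncio.sleep(0)
        return "ok"


    @pytest.mark.asyncio
    async def test_chat_completion_does_not_block_event_loop(caplog):
        loop = asyncio.get_running_loop()
        loop.set_debug(True)
        # In debug mode, callbacks slower than this threshold are logged by
        # asyncio as "Executing <Task ...> took X.XXX seconds". A 200 ms
        # threshold (vs. the 100 ms default) reduces false positives on CI.
        loop.slow_callback_duration = 0.2

        with caplog.at_level(logging.WARNING, logger="asyncio"):
            await fake_chat_completion()

        blocking_warnings = [
            r.getMessage() for r in caplog.records if "took" in r.getMessage()
        ]
        assert not blocking_warnings, f"event loop was blocked: {blocking_warnings}"

Under these assumptions, the test fails only if some step of the call holds the event loop for more than 200 ms, which is why bumping the threshold from the default makes the check less sensitive to machine load without changing what it verifies.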