llama-stack-mirror/llama_stack/providers
Commit deee355952 by Ilya Kolchinsky
fix: Added lazy initialization of the remote vLLM client to avoid issues with expired asyncio event loop (#1969)
# What does this PR do?
Closes #1968.

The asynchronous client in `VLLMInferenceAdapter` is now initialized
immediately before first use rather than in `VLLMInferenceAdapter.initialize`.
This prevents failures caused by accessing an expired event loop after the
`asyncio.run` call that created it has completed.
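
For illustration, here is a minimal sketch of the lazy-initialization pattern the PR describes. This is an assumption-laden sketch, not the adapter's actual code: the `AsyncOpenAI` client type and the `config.url` / `config.api_token` attributes are illustrative stand-ins.

```python
# Illustrative sketch of lazy client initialization (names such as
# config.url and config.api_token are assumed, not taken verbatim
# from the actual adapter).
from openai import AsyncOpenAI


class VLLMInferenceAdapter:
    def __init__(self, config) -> None:
        self.config = config
        self._client: AsyncOpenAI | None = None

    async def initialize(self) -> None:
        # Deliberately does NOT create the async client. A client built
        # here would be tied to whatever event loop runs during setup
        # (e.g. inside asyncio.run), which may be closed by the time a
        # request arrives.
        pass

    @property
    def client(self) -> AsyncOpenAI:
        # Create the client on first use, inside the caller's live loop.
        if self._client is None:
            self._client = AsyncOpenAI(
                base_url=self.config.url,
                api_key=self.config.api_token,
            )
        return self._client
```

With this shape, a setup path such as `asyncio.run(adapter.initialize())` never constructs the client, so a later request running in a different, live event loop builds it fresh on first access instead of touching a client bound to the dead loop.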


## Test Plan
- Ran the unit tests, including `test_remote_vllm.py`.
- Ran the reproduction snippet from #1968.

---------

Co-authored-by: Sébastien Han <seb@redhat.com>
2025-04-23 15:33:19 +02:00
| Name | Last commit | Date |
|------|-------------|------|
| `inline` | fix: OAI compat endpoint for meta reference inference provider (#1962) | 2025-04-17 11:16:04 -07:00 |
| `registry` | fix: use torchao 0.8.0 for inference (#1925) | 2025-04-10 13:39:20 -07:00 |
| `remote` | fix: Added lazy initialization of the remote vLLM client to avoid issues with expired asyncio event loop (#1969) | 2025-04-23 15:33:19 +02:00 |
| `tests` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `utils` | fix: OAI compat endpoint for meta reference inference provider (#1962) | 2025-04-17 11:16:04 -07:00 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `datatypes.py` | feat: add health to all providers through providers endpoint (#1418) | 2025-04-14 11:59:36 +02:00 |