llama-stack-mirror/llama_stack
Ilya Kolchinsky (deee355952)
fix: Added lazy initialization of the remote vLLM client to avoid issues with expired asyncio event loop (#1969)
# What does this PR do?
Closes #1968.

The asynchronous client in `VLLMInferenceAdapter` is now initialized lazily,
immediately before its first use, rather than in `VLLMInferenceAdapter.initialize`.
This prevents errors caused by accessing an event loop that has already expired
after a completed `asyncio.run` call.
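
For illustration, a minimal sketch of the lazy-initialization pattern this change applies. The class, property, and use of an `httpx` client here are hypothetical stand-ins, not the adapter's actual implementation:

```python
import httpx


class LazyClientAdapter:
    """Illustrative only: defer async client construction until first use."""

    def __init__(self, base_url: str) -> None:
        self.base_url = base_url
        self._client: httpx.AsyncClient | None = None

    async def initialize(self) -> None:
        # Intentionally does NOT create the client here; if it were created in
        # initialize() under one asyncio.run(), later calls made under a new
        # event loop would touch a client bound to the closed loop.
        pass

    @property
    def client(self) -> httpx.AsyncClient:
        # Created lazily, inside whichever event loop is current when the
        # first request is actually made.
        if self._client is None:
            self._client = httpx.AsyncClient(base_url=self.base_url)
        return self._client

    async def completion(self, prompt: str) -> dict:
        resp = await self.client.post("/v1/completions", json={"prompt": prompt})
        resp.raise_for_status()
        return resp.json()
```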


## Test Plan
Ran unit tests, including `test_remote_vllm.py`.
Ran the code snippet mentioned in #1968.

---------

Co-authored-by: Sébastien Han <seb@redhat.com>
2025-04-23 15:33:19 +02:00
| Name | Last commit | Date |
|------|-------------|------|
| apis | feat(agents): add agent naming functionality (#1922) | 2025-04-17 07:02:47 -07:00 |
| cli | feat: allow building distro with external providers (#1967) | 2025-04-18 17:18:28 +02:00 |
| distribution | feat: Hide tool output under an expander in Playground UI (#2003) | 2025-04-23 15:32:12 +02:00 |
| models | fix: OAI compat endpoint for meta reference inference provider (#1962) | 2025-04-17 11:16:04 -07:00 |
| providers | fix: Added lazy initialization of the remote vLLM client to avoid issues with expired asyncio event loop (#1969) | 2025-04-23 15:33:19 +02:00 |
| strong_typing | chore: more mypy checks (ollama, vllm, ...) (#1777) | 2025-04-01 17:12:39 +02:00 |
| templates | feat: Update NVIDIA to GA docs; remove notebook reference until ready (#1999) | 2025-04-18 19:13:18 -04:00 |
| __init__.py | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| env.py | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| log.py | chore: Remove style tags from log formatter (#1808) | 2025-03-27 10:18:21 -04:00 |
| schema_utils.py | fix: dont check protocol compliance for experimental methods | 2025-04-12 16:26:32 -07:00 |