llama-stack-mirror/llama_stack
Ihar Hrachyshka db21eab713
fix: catch TimeoutError in place of asyncio.TimeoutError (#2131)
# What does this PR do?

As per the docs [1], since Python 3.11 wait_for() raises the builtin TimeoutError. Since
we currently support Python 3.10+, we have to catch both exceptions.

[1]:
https://docs.python.org/3.12/library/asyncio-task.html#asyncio.wait_for
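
For illustration only, a minimal sketch of the exception-handling pattern this change applies; the `call_with_timeout` helper and its signature are hypothetical, not code from this repository:

```python
import asyncio


async def call_with_timeout(coro, timeout: float):
    """Await `coro`, returning None if it does not finish within `timeout` seconds."""
    try:
        return await asyncio.wait_for(coro, timeout=timeout)
    except (TimeoutError, asyncio.TimeoutError):
        # Python 3.11+ raises the builtin TimeoutError; Python 3.10 raises
        # asyncio.TimeoutError, so both are caught here.
        return None
```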


## Test Plan

No explicit testing; this is just code hardening to reflect the documented behavior.


Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-05-12 11:49:59 +02:00
apis feat: add metrics query API (#1394) 2025-05-07 10:11:26 -07:00
cli chore(refact)!: simplify config management (#1105) 2025-05-07 09:18:12 -07:00
distribution fix: catch TimeoutError in place of asyncio.TimeoutError (#2131) 2025-05-12 11:49:59 +02:00
models fix: llama4 tool use prompt fix (#2103) 2025-05-06 22:18:31 -07:00
providers fix: raise an error when no vector DB IDs are provided to the RAG tool (#1911) 2025-05-12 11:25:13 +02:00
strong_typing chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
templates fix: revert "feat(provider): adding llama4 support in together inference provider (#2123)" (#2124) 2025-05-08 15:18:16 -07:00
__init__.py export LibraryClient 2024-12-13 12:08:00 -08:00
env.py refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) 2025-03-04 14:53:47 -08:00
log.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
schema_utils.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00