llama-stack-mirror/llama_stack/providers/utils
Ihar Hrachyshka 2433ef218d feat: implement async job scheduler for torchtune
A separate thread is now started to execute training jobs. Training
requests now return a job ID before the job completes, which fixes API
timeouts for any job that takes longer than a minute.
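The flow described above — submit a training job, get a job ID back immediately, and let a background worker thread execute it — can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the actual scheduler.py implementation; the `Scheduler` and `JobStatus` names and method signatures here are hypothetical.

```python
import queue
import threading
import uuid
from enum import Enum


class JobStatus(Enum):
    SCHEDULED = "scheduled"
    RUNNING = "running"
    COMPLETED = "completed"
    FAILED = "failed"


class Scheduler:
    """Sketch: runs submitted jobs on a background thread; submission returns immediately."""

    def __init__(self):
        self._queue = queue.Queue()
        self._jobs = {}  # job_id -> JobStatus
        self._lock = threading.Lock()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def schedule(self, fn, *args):
        """Enqueue a job and return its ID without waiting for completion."""
        job_id = uuid.uuid4().hex
        with self._lock:
            self._jobs[job_id] = JobStatus.SCHEDULED
        self._queue.put((job_id, fn, args))
        return job_id  # caller gets the ID before the job runs

    def status(self, job_id):
        with self._lock:
            return self._jobs[job_id]

    def _run(self):
        # Worker loop: pull jobs off the queue and record their outcome.
        while True:
            job_id, fn, args = self._queue.get()
            with self._lock:
                self._jobs[job_id] = JobStatus.RUNNING
            try:
                fn(*args)
                status = JobStatus.COMPLETED
            except Exception:
                status = JobStatus.FAILED
            with self._lock:
                self._jobs[job_id] = status
```

Because `schedule()` only enqueues work, the HTTP handler can respond with the job ID right away, and clients poll `status()` instead of holding a connection open for the duration of training.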

Note: the scheduler code is meant to be spun out in the future into a
common provider service that can be reused for different APIs and
providers. It is also expected to back the /jobs API proposed here:

https://github.com/meta-llama/llama-stack/discussions/1238

Hence its somewhat generalized form, which is expected to simplify its
adoption elsewhere in the future.

Note: this patch doesn't attempt to implement the missing APIs (e.g. job
cancellation or removal); that work is left to follow-up PRs.

Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-03-28 12:11:59 -04:00
bedrock Fix precommit check after moving to ruff (#927) 2025-02-02 06:46:45 -08:00
common feat: [new open benchmark] BFCL_v3 (#1578) 2025-03-14 12:50:49 -07:00
datasetio fix: Call pandas.read_* in a separate thread (#1698) 2025-03-19 10:46:37 -07:00
inference feat: Support "stop" parameter in remote:vLLM (#1715) 2025-03-24 12:42:55 -07:00
kvstore chore: made inbuilt tools blocking calls into async non blocking calls (#1509) 2025-03-09 16:59:24 -07:00
memory fix(deps): move chardet and pypdf imports inline where used (#1434) 2025-03-06 17:09:14 -08:00
scoring feat: [New Eval Benchmark] IfEval (#1708) 2025-03-19 16:39:59 -07:00
telemetry feat: use same trace ids in stack and otel (#1759) 2025-03-21 15:41:26 -07:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00
scheduler.py feat: implement async job scheduler for torchtune 2025-03-28 12:11:59 -04:00