llama-stack-mirror/llama_stack/providers
Charlie Doern 46c5b14a22 feat: handle graceful shutdown
Currently this implementation hangs because `trainer.train()` blocks.

Rewrite the implementation to kick off the model download, device instantiation, dataset processing, and training in a monitored subprocess.

All of these steps need to run in the same subprocess; otherwise different devices get used, which causes torch errors. A rough sketch of the pattern follows the commit details below.

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-05-16 16:41:24 -04:00
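
A minimal sketch of the monitored-subprocess pattern the commit describes, not the provider's actual code; `run_training` and the `config` dict are hypothetical stand-ins for illustration.

    import asyncio
    import multiprocessing


    def run_training(config: dict) -> None:
        # Hypothetical entry point: model download, device setup, dataset
        # processing, and trainer.train() all happen inside this child
        # process so the parent never touches a device and no device
        # mismatch can occur.
        ...


    async def train_in_subprocess(config: dict, poll_interval: float = 1.0) -> None:
        # "spawn" gives the child a clean interpreter with no inherited CUDA state.
        ctx = multiprocessing.get_context("spawn")
        proc = ctx.Process(target=run_training, args=(config,))
        proc.start()
        try:
            # Poll instead of joining so the event loop stays responsive
            # while training runs in the background.
            while proc.is_alive():
                await asyncio.sleep(poll_interval)
            if proc.exitcode != 0:
                raise RuntimeError(f"training subprocess exited with code {proc.exitcode}")
        except asyncio.CancelledError:
            # Graceful shutdown: terminate the child instead of hanging on
            # a blocking trainer.train() call, then re-raise.
            proc.terminate()
            proc.join()
            raise
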
inline feat: handle graceful shutdown 2025-05-16 16:41:24 -04:00
registry feat: add huggingface post_training impl 2025-05-16 16:37:30 -04:00
remote feat: use openai-python for openai inference provider (#2193) 2025-05-16 12:57:56 -07:00
utils fix: multiple tool calls in remote-vllm chat_completion (#2161) 2025-05-15 11:23:29 -07:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00
datatypes.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00