mirror of https://github.com/meta-llama/llama-stack.git
Rather than handling multi-GPU training within a recipe, distributed training should be one of our scheduler offerings. Introduce the DistributedJobScheduler, which kicks off a `finetune_handler.py` script using torchrun. This handler processes the training args via argparse and calls the appropriate recipe, as `post_training.py` used to do. Torchrun takes care of environment variables such as world_size, local_rank, etc. Signed-off-by: Charlie Doern <cdoern@redhat.com>
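As a rough illustration of the flow described above, here is a minimal Python sketch (not the actual llama-stack implementation): a scheduler-side helper that shells out to torchrun, and a handler-side entry point that parses training args with argparse and reads the environment variables torchrun injects (WORLD_SIZE, LOCAL_RANK, RANK). The flag names, the `launch_distributed_job` helper, and the recipe dispatch are assumptions for illustration only.

```python
# Illustrative sketch only; names and flags are hypothetical, not llama-stack's API.
import argparse
import os
import subprocess


def launch_distributed_job(script: str, nproc_per_node: int, training_args: list[str]) -> None:
    """What a DistributedJobScheduler might do: shell out to torchrun so it
    handles process spawning and sets the usual distributed env vars."""
    cmd = [
        "torchrun",
        f"--nproc-per-node={nproc_per_node}",
        script,
        *training_args,
    ]
    subprocess.run(cmd, check=True)


def main() -> None:
    """What a finetune_handler.py entry point might look like: parse the
    training args forwarded by the scheduler, read the env vars torchrun
    exports for each worker, and dispatch to the chosen recipe."""
    parser = argparse.ArgumentParser(description="Fine-tuning entry point (illustrative)")
    parser.add_argument("--model", required=True)    # hypothetical flag
    parser.add_argument("--dataset", required=True)  # hypothetical flag
    parser.add_argument("--recipe", default="lora")  # hypothetical flag
    args = parser.parse_args()

    # torchrun sets these for every worker process it spawns.
    world_size = int(os.environ.get("WORLD_SIZE", "1"))
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    rank = int(os.environ.get("RANK", "0"))

    print(f"rank={rank}/{world_size} local_rank={local_rank} "
          f"recipe={args.recipe} model={args.model}")
    # ...call into the selected recipe here, as post_training.py used to...


if __name__ == "__main__":
    main()
```

A usage sketch: the scheduler would call `launch_distributed_job("finetune_handler.py", nproc_per_node=4, training_args=["--model", "...", "--dataset", "..."])`, and torchrun would spawn one handler process per GPU with the distributed env vars already populated.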
agents
datasetio
eval
files/localfs
inference
ios/inference
post_training
safety
scoring
telemetry
tool_runtime
vector_io
__init__.py