The HF `SFTTrainer` supports distributed training using FSDP.

Add a new recipe, `finetune_multi_device`, which supports multi-GPU (CUDA) training using FSDP and, optionally, LoRA.

transformers hides _a lot_ of its FSDP usage behind the training args:
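As a minimal sketch (assuming a recent transformers/trl/peft; the model id, dataset, and `fsdp_config` keys below are illustrative, not taken from the recipe), driving FSDP and optional LoRA through the training args looks roughly like:

```python
# Sketch only: assumes recent transformers/trl/peft; names below are
# illustrative, not the recipe's actual values.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# FSDP is enabled entirely through TrainingArguments fields (SFTConfig
# subclasses TrainingArguments): 'fsdp' selects the sharding strategy,
# 'fsdp_config' carries the remaining knobs. Exact fsdp_config key names
# vary across transformers versions (assumption: a recent release).
args = SFTConfig(
    output_dir="./finetune-out",
    per_device_train_batch_size=1,
    fsdp="full_shard auto_wrap",
    fsdp_config={"transformer_layer_cls_to_wrap": ["LlamaDecoderLayer"]},
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B",  # hypothetical model id
    train_dataset=load_dataset("trl-lib/Capybara", split="train"),  # example dataset
    args=args,
    # Optional LoRA, per the recipe description; target modules are a guess.
    peft_config=LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"]),
)
trainer.train()
```

Launched with e.g. `torchrun --nproc_per_node=2 finetune.py`, transformers reads the `fsdp`/`fsdp_config` fields and wraps the model in torch FSDP internally, so the recipe never has to touch `torch.distributed.fsdp` directly.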