The HF SFTTrainer supports distributed training using FSDP.
Add a new recipe, `finetune_multi_device`, which supports multi-GPU (CUDA) training
using FSDP and, optionally, LoRA.
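
As a rough sketch of the optional LoRA path (assuming the recipe builds a `peft` `LoraConfig` and hands it to `SFTTrainer` via `peft_config`; the values below are illustrative, not necessarily the recipe's defaults):

```python
# Hedged sketch of the optional LoRA path: build a peft LoraConfig and pass
# it to SFTTrainer as `peft_config`. Parameter values are illustrative.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling applied to the LoRA update
    lora_dropout=0.05,                    # dropout on the LoRA branch
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
# Later: SFTTrainer(..., peft_config=lora_config)
```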
transformers hides _a lot_ of its FSDP usage behind the training args:
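
For example (a minimal sketch using the standard `fsdp` / `fsdp_config` fields on `transformers.TrainingArguments` / `trl.SFTConfig`; the exact values the recipe passes may differ):

```python
# Minimal sketch: FSDP is enabled purely through the HF training args.
# transformers/accelerate then handle model wrapping, parameter sharding,
# and gathering the full state dict when saving checkpoints.
from trl import SFTConfig

training_args = SFTConfig(
    output_dir="./checkpoints",
    per_device_train_batch_size=1,
    gradient_checkpointing=True,
    fsdp="full_shard auto_wrap",  # space-separated FSDP options
    fsdp_config={
        # illustrative keys; wrap each decoder layer and keep original params
        "transformer_layer_cls_to_wrap": ["LlamaDecoderLayer"],
        "use_orig_params": True,
    },
)
```

A run like this is typically launched with a multi-process launcher such as `torchrun --nproc-per-node=<num_gpus>`, so no explicit FSDP wrapping code appears in the recipe itself.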