The HF SFTTrainer supports distributed training using FSDP.

Add a new recipe, `finetune_multi_device`, which supports multi-GPU (CUDA) training using FSDP and optionally LoRA.

transformers hides _a lot_ of its usage of FSDP behind the training args, as sketched below:
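A minimal sketch of what that can look like, assuming the recipe drives TRL's `SFTTrainer` and enables FSDP purely through the training arguments (`SFTConfig` subclasses `transformers.TrainingArguments`); the model id, dataset, and hyperparameter values here are placeholders rather than the recipe's actual defaults:

```python
# Sketch only: FSDP is switched on entirely via the training arguments,
# so the trainer code itself contains no FSDP-specific logic.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder model and dataset; the real recipe reads these from its config.
# The dataset is assumed to have a "text" column, which SFTTrainer uses by default.
model_id = "meta-llama/Llama-3.2-3B-Instruct"
train_dataset = load_dataset("json", data_files="train.jsonl", split="train")

training_args = SFTConfig(
    output_dir="./checkpoints",
    per_device_train_batch_size=1,
    num_train_epochs=1,
    bf16=True,
    # These two arguments are what turn on FSDP: transformers wraps the model
    # and shards parameters/optimizer state across the launched processes.
    fsdp="full_shard auto_wrap",
    fsdp_config={
        "backward_prefetch": "backward_pre",
        "use_orig_params": True,
    },
)

# Optional LoRA: passing a peft_config makes SFTTrainer attach adapters
# to the model before training starts.
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model_id,
    args=training_args,
    train_dataset=train_dataset,
    peft_config=peft_config,
)
trainer.train()
```

For FSDP to actually shard across GPUs, the script is launched once per device, e.g. with `torchrun --nproc-per-node=<num_gpus>` or `accelerate launch`; transformers then picks up the `fsdp`/`fsdp_config` settings and wraps the model internally.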