feat: add finetune_multi_device recipe with fsdp support

The HF SFTTrainer supports distributed training using FSDP.

Add a new recipe, `finetune_multi_device`, which supports multi-GPU (CUDA) training
using FSDP and, optionally, LoRA.

transformers hides a lot of its FSDP usage behind the training args:
a6b51e7341/src/transformers/training_args.py (L1535)

You need to pass both `fsdp` and `fsdp_config` to get it to work properly. However,
it seems many of the `fsdp_config` entries are silently ignored. The key settings to get this working were:
- full_shard
- offload (CPU offload)
- transformer_layer_cls_to_wrap (model-specific wrapping)
- cpu_ram_efficient_loading
- sharding_strategy
- limit_all_gathers
- sync_module_states
- backward_prefetch
- use_orig_params

These can be seen in both the `fsdp=` and `fsdp_config=` arguments of the `SFTConfig` call.
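
A minimal sketch of what that wiring looks like (illustrative only, not the recipe code itself; the wrapped layer class is model specific, and the exact set of recognized `fsdp_config` keys varies by transformers version):

```python
from trl import SFTConfig

# Sketch of the FSDP wiring described above -- values are illustrative,
# and "LlamaDecoderLayer" is just an example of a model-specific layer class.
config = SFTConfig(
    output_dir="./checkpoints",
    fsdp="full_shard offload",  # full sharding plus CPU offload
    fsdp_config={
        "transformer_layer_cls_to_wrap": ["LlamaDecoderLayer"],  # model-specific wrapping
        "cpu_ram_efficient_loading": True,
        "limit_all_gathers": True,
        "sync_module_states": True,
        "backward_prefetch": "backward_pre",
        "use_orig_params": True,  # generally needed when only LoRA params are trainable
    },
)
```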

I have successfully tested this with different model architectures, both with and without LoRA.

The user can now toggle `recipe` in their provider config between `single` and `multi` to select between the two recipes.
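
For example, mirroring the run.yaml entries further down, a provider config selecting the multi-device recipe could look something like this (the provider id/type lines are illustrative):

```yaml
post_training:
- provider_id: huggingface            # illustrative
  provider_type: inline::huggingface  # illustrative
  config:
    checkpoint_format: huggingface
    distributed_backend: null
    device: cuda
    recipe: multi                     # default is "single"
```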

For debugging purposes, NCCL logging settings can now be configured via the provider config as well.
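
Under the hood these presumably translate to NCCL's standard debug environment variables; a hypothetical sketch of that mapping (not the actual recipe code):

```python
import os

def configure_nccl_debug(enable_nccl_debug: bool, nccl_debug_subsys: str) -> None:
    # Hypothetical helper: map the provider config flags onto NCCL's standard
    # debug environment variables before torch.distributed is initialized.
    if enable_nccl_debug:
        os.environ["NCCL_DEBUG"] = "INFO"                    # verbose NCCL logging
        os.environ["NCCL_DEBUG_SUBSYS"] = nccl_debug_subsys  # e.g. ALL, INIT, COLL, P2P
```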

Signed-off-by: Charlie Doern <cdoern@redhat.com>
Charlie Doern 2025-05-19 13:21:35 -04:00
parent 35c2817d0a
commit 6494658a10
5 changed files with 1146 additions and 2 deletions


@@ -57,7 +57,7 @@ class HuggingFacePostTrainingConfig(BaseModel):
     # L2 regularization coefficient
     # Helps prevent overfitting
-    weight_decay: float = 0.01
+    weight_decay: float = 0.00
     # Number of worker processes for data loading
     # Higher values can improve data loading speed but increase memory usage
@@ -67,6 +67,17 @@ class HuggingFacePostTrainingConfig(BaseModel):
     # Can improve data transfer speed to GPU but uses more memory
     dataloader_pin_memory: bool = True
+    # Recipe type for training (single or multi device)
+    recipe: str = "single"
+    # NCCL debug configuration for distributed training
+    # Enable detailed NCCL logging for debugging distributed training issues
+    enable_nccl_debug: bool = False
+    # NCCL subsystems to debug (NONE, ALL, INIT, COLL, P2P, SHM, NET)
+    # Controls which NCCL components generate debug output
+    nccl_debug_subsys: str = "NONE"
     @classmethod
     def sample_run_config(cls, __distro_dir__: str, **kwargs: Any) -> dict[str, Any]:
-        return {"checkpoint_format": "huggingface", "distributed_backend": None, "device": "cpu"}
+        return {"checkpoint_format": "huggingface", "distributed_backend": None, "device": "cpu", "recipe": "single"}


@@ -22,6 +22,7 @@ from llama_stack.apis.post_training import (
 from llama_stack.providers.inline.post_training.huggingface.config import (
     HuggingFacePostTrainingConfig,
 )
+from llama_stack.providers.inline.post_training.huggingface.recipes.finetune_multi_device import HFFinetuningMultiDevice
 from llama_stack.providers.inline.post_training.huggingface.recipes.finetune_single_device import (
     HFFinetuningSingleDevice,
 )
@@ -88,6 +89,14 @@ class HuggingFacePostTrainingImpl:
             datasetio_api=self.datasetio_api,
             datasets_api=self.datasets_api,
         )
+        if self.config.recipe == "multi":
+            recipe = HFFinetuningMultiDevice(
+                job_uuid=job_uuid,
+                datasetio_api=self.datasetio_api,
+                datasets_api=self.datasets_api,
+                enable_nccl_debug=self.config.enable_nccl_debug,
+                nccl_debug_subsys=self.config.nccl_debug_subsys,
+            )
         resources_allocated, checkpoints = await recipe.train(
             model=model,


@@ -91,6 +91,7 @@ providers:
       checkpoint_format: huggingface
       distributed_backend: null
       device: cpu
+      recipe: single
   tool_runtime:
   - provider_id: brave-search
     provider_type: remote::brave-search


@@ -89,6 +89,7 @@ providers:
       checkpoint_format: huggingface
       distributed_backend: null
       device: cpu
+      recipe: single
   tool_runtime:
   - provider_id: brave-search
     provider_type: remote::brave-search