llama-stack-mirror/llama_stack/providers/inline/post_training/huggingface
Charlie Doern 6c3a40e3d2 feat: add huggingface post_training impl
Adds an inline HF SFTTrainer provider. Alongside torchtune, this is a very popular option for running training jobs. The config allows a user to specify key fields such as the model, chat_template, device, etc.
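
As an illustrative sketch only (the class name and field defaults here are assumptions; the authoritative schema lives in config.py), the config might look like:

```python
# Hypothetical sketch of the key config fields described above; see
# config.py for the real schema.
from pydantic import BaseModel


class HuggingFacePostTrainingConfig(BaseModel):  # name assumed for illustration
    model: str  # any valid HF model identifier
    chat_template: str | None = None  # optional chat template override
    device: str = "cpu"  # e.g. "cpu", "mps", or "cuda"
```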

The provider comes with one recipe, `finetune_single_device`, which works both with and without LoRA.
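
A minimal sketch of how a recipe might toggle LoRA via peft (the `use_lora` flag and the hyperparameter values are assumptions, not the provider's actual defaults):

```python
# Sketch: build a LoraConfig only when LoRA is requested.
from peft import LoraConfig

peft_config = None
if use_lora:  # hypothetical flag
    peft_config = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],
    )
# TRL's SFTTrainer accepts peft_config=None, which trains the full model.
```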

Any model that is a valid HF identifier can be given, and the model will be pulled.
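
For example, pulling a model by identifier with transformers (the model id here is just an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "distilgpt2"  # any valid HF identifier works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```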

This has been tested so far with the CPU and MPS device types, but it should be compatible with CUDA out of the box.
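
A sketch of how the configured device could be validated at startup (the helper name is assumed):

```python
import torch


def resolve_device(name: str) -> torch.device:
    # Fail fast if the requested backend is not actually usable.
    if name == "cuda" and not torch.cuda.is_available():
        raise RuntimeError("CUDA requested but not available")
    if name == "mps" and not torch.backends.mps.is_available():
        raise RuntimeError("MPS requested but not available")
    return torch.device(name)
```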

The provider processes the given dataset into the proper format, establishes the steps per epoch, steps per save, and steps per eval, sets a sane SFTConfig, and runs n_epochs of training.
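
A sketch of that flow under assumed variable names (`train_dataset`, `batch_size`, and the specific step heuristics are illustrative, not the provider's actual choices):

```python
from trl import SFTConfig

# Derive the step counts from the dataset size.
steps_per_epoch = max(1, len(train_dataset) // batch_size)
save_steps = steps_per_epoch  # e.g. checkpoint once per epoch
eval_steps = max(1, steps_per_epoch // 4)  # e.g. evaluate a few times per epoch

sft_config = SFTConfig(
    output_dir=str(checkpoint_dir or "/tmp/sft"),
    num_train_epochs=n_epochs,
    per_device_train_batch_size=batch_size,
    save_steps=save_steps,
    eval_steps=eval_steps,
)
```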

If checkpoint_dir is None, no model is saved. If a checkpoint dir is given, a model is saved every `save_steps` steps and at the end of training.
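
Continuing the sketch above with TRL's SFTTrainer (again under assumed names):

```python
from trl import SFTTrainer

# No checkpoint_dir means nothing is saved; otherwise save every `save_steps`.
sft_config.save_strategy = "no" if checkpoint_dir is None else "steps"

trainer = SFTTrainer(
    model=model,
    args=sft_config,
    train_dataset=train_dataset,
    peft_config=peft_config,  # None disables LoRA (see above)
)
trainer.train()

if checkpoint_dir is not None:
    trainer.save_model(str(checkpoint_dir))  # final save at the end of training
```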

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-05-16 16:37:30 -04:00
..
recipes feat: add huggingface post_training impl 2025-05-16 16:37:30 -04:00
__init__.py feat: add huggingface post_training impl 2025-05-16 16:37:30 -04:00
config.py feat: add huggingface post_training impl 2025-05-16 16:37:30 -04:00
post_training.py feat: add huggingface post_training impl 2025-05-16 16:37:30 -04:00