
# inline::huggingface

## Description

HuggingFace-based post-training provider for fine-tuning models using the HuggingFace ecosystem.

## Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `device` | `<class 'str'>` | No | cuda | |
| `distributed_backend` | `Literal['fsdp', 'deepspeed']` | No | | |
| `checkpoint_format` | `Literal['full_state', 'huggingface']` | No | huggingface | |
| `chat_template` | `<class 'str'>` | No | <\|user\|><br>{input}<br><\|assistant\|><br>{output} | |
| `model_specific_config` | `<class 'dict'>` | No | {'trust_remote_code': True, 'attn_implementation': 'sdpa'} | |
| `max_seq_length` | `<class 'int'>` | No | 2048 | |
| `gradient_checkpointing` | `<class 'bool'>` | No | False | |
| `save_total_limit` | `<class 'int'>` | No | 3 | |
| `logging_steps` | `<class 'int'>` | No | 10 | |
| `warmup_ratio` | `<class 'float'>` | No | 0.1 | |
| `weight_decay` | `<class 'float'>` | No | 0.01 | |
| `dataloader_num_workers` | `<class 'int'>` | No | 4 | |
| `dataloader_pin_memory` | `<class 'bool'>` | No | True | |
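For illustration, the fields and defaults in the table above can be mirrored as a plain Python dataclass. This is only a sketch: the actual provider defines its configuration as a Pydantic model, and the class name below is an assumption, not the provider's real class.

```python
# Illustrative sketch of the configuration table above, using a plain
# dataclass instead of the provider's actual Pydantic model. Field names
# and defaults come from the table; the class name is hypothetical.
from dataclasses import dataclass, field
from typing import Literal, Optional


@dataclass
class HuggingFacePostTrainingConfig:
    device: str = "cuda"
    distributed_backend: Optional[Literal["fsdp", "deepspeed"]] = None
    checkpoint_format: Literal["full_state", "huggingface"] = "huggingface"
    chat_template: str = "<|user|>\n{input}\n<|assistant|>\n{output}"
    model_specific_config: dict = field(
        default_factory=lambda: {
            "trust_remote_code": True,
            "attn_implementation": "sdpa",
        }
    )
    max_seq_length: int = 2048
    gradient_checkpointing: bool = False
    save_total_limit: int = 3
    logging_steps: int = 10
    warmup_ratio: float = 0.1
    weight_decay: float = 0.01
    dataloader_num_workers: int = 4
    dataloader_pin_memory: bool = True


# Override only the fields you need; everything else keeps its default.
cfg = HuggingFacePostTrainingConfig(device="cpu")
print(cfg.device)          # cpu
print(cfg.max_seq_length)  # 2048
```

Because every field has a default, an empty configuration is valid and all values above are what you get out of the box.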

## Sample Configuration

```yaml
checkpoint_format: huggingface
distributed_backend: null
device: cpu
```
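Conceptually, the sample above only overrides a subset of fields; unspecified fields fall back to the defaults in the configuration table. A minimal sketch of that merge semantics (not the provider's actual loading code, which goes through Pydantic validation) looks like this:

```python
# Sketch of override semantics: sample values replace defaults,
# unspecified fields keep their documented default values.
# This is illustrative only, not llama-stack's actual config loader.
defaults = {
    "device": "cuda",
    "distributed_backend": None,
    "checkpoint_format": "huggingface",
    "max_seq_length": 2048,
}

# The sample configuration from this page, as a parsed dict.
sample = {
    "checkpoint_format": "huggingface",
    "distributed_backend": None,
    "device": "cpu",
}

effective = {**defaults, **sample}
print(effective["device"])          # cpu   (overridden)
print(effective["max_seq_length"])  # 2048  (default retained)
```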