mirror of
https://github.com/meta-llama/llama-stack.git
synced 2025-12-03 09:53:45 +00:00
---
description: "HuggingFace-based post-training provider for fine-tuning models using the HuggingFace ecosystem."
sidebar_label: Huggingface-Gpu
title: inline::huggingface-gpu
---

# inline::huggingface-gpu
## Description
HuggingFace-based post-training provider for fine-tuning models using the HuggingFace ecosystem.
## Configuration
| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `device` | `str` | No | cuda | |
| `distributed_backend` | `Literal[fsdp, deepspeed] \| None` | No | | |
| `checkpoint_format` | `Literal[full_state, huggingface] \| None` | No | huggingface | |
| `chat_template` | `str` | No | `<\|user\|>`<br/>`{input}`<br/>`<\|assistant\|>`<br/>`{output}` | |
| `model_specific_config` | `dict` | No | `{'trust_remote_code': True, 'attn_implementation': 'sdpa'}` | |
| `max_seq_length` | `int` | No | 2048 | |
| `gradient_checkpointing` | `bool` | No | False | |
| `save_total_limit` | `int` | No | 3 | |
| `logging_steps` | `int` | No | 10 | |
| `warmup_ratio` | `float` | No | 0.1 | |
| `weight_decay` | `float` | No | 0.01 | |
| `dataloader_num_workers` | `int` | No | 4 | |
| `dataloader_pin_memory` | `bool` | No | True | |
| `dpo_beta` | `float` | No | 0.1 | |
| `use_reference_model` | `bool` | No | True | |
| `dpo_loss_type` | `Literal[sigmoid, hinge, ipo, kto_pair]` | No | sigmoid | |
| `dpo_output_dir` | `str` | No | | |
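The field names, types, and defaults above can be mirrored as a plain Python dataclass. This is an illustrative sketch only — the class name and the validation in `__post_init__` are not the provider's actual implementation, just a dependency-free rendering of the schema in the table:

```python
from dataclasses import dataclass, field
from typing import Literal, Optional


# Illustrative mirror of the configuration table; the real provider defines
# its own config class, so treat this as a sketch of the schema only.
@dataclass
class HuggingFacePostTrainingConfig:
    device: str = "cuda"
    distributed_backend: Optional[Literal["fsdp", "deepspeed"]] = None
    checkpoint_format: Optional[Literal["full_state", "huggingface"]] = "huggingface"
    chat_template: str = "<|user|>\n{input}\n<|assistant|>\n{output}"
    # Mutable default, so it must go through default_factory.
    model_specific_config: dict = field(
        default_factory=lambda: {"trust_remote_code": True, "attn_implementation": "sdpa"}
    )
    max_seq_length: int = 2048
    gradient_checkpointing: bool = False
    save_total_limit: int = 3
    logging_steps: int = 10
    warmup_ratio: float = 0.1
    weight_decay: float = 0.01
    dataloader_num_workers: int = 4
    dataloader_pin_memory: bool = True
    dpo_beta: float = 0.1
    use_reference_model: bool = True
    dpo_loss_type: Literal["sigmoid", "hinge", "ipo", "kto_pair"] = "sigmoid"
    dpo_output_dir: str = ""

    def __post_init__(self) -> None:
        # Basic sanity checks on the Literal-typed fields.
        if self.distributed_backend not in (None, "fsdp", "deepspeed"):
            raise ValueError(f"unknown distributed_backend: {self.distributed_backend}")
        if self.dpo_loss_type not in ("sigmoid", "hinge", "ipo", "kto_pair"):
            raise ValueError(f"unknown dpo_loss_type: {self.dpo_loss_type}")
```

Instantiating the class with no arguments reproduces the table's defaults; passing an out-of-range value for a `Literal` field raises a `ValueError` in this sketch.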
## Sample Configuration
```yaml
checkpoint_format: huggingface
distributed_backend: null
device: cpu
dpo_output_dir: ~/.llama/dummy/dpo_output
```
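One way to see how such a sample relates to the full field set is to merge its keys over the table's defaults. The sketch below is illustrative (the stack's own config loader is not shown; the parsed YAML is written as a dict literal so the example stays dependency-free, where a real loader would use `yaml.safe_load`):

```python
import os

# Defaults taken from the configuration table above (subset shown).
defaults = {
    "device": "cuda",
    "distributed_backend": None,
    "checkpoint_format": "huggingface",
    "dpo_output_dir": "",
}

# The sample configuration in parsed form.
sample = {
    "checkpoint_format": "huggingface",
    "distributed_backend": None,
    "device": "cpu",
    "dpo_output_dir": "~/.llama/dummy/dpo_output",
}

# Sample keys override defaults; unspecified fields keep their defaults.
config = {**defaults, **sample}

# Expand the user home directory in the output path before use.
config["dpo_output_dir"] = os.path.expanduser(config["dpo_output_dir"])
print(config["device"])  # prints "cpu": the sample overrides the default "cuda"
```

Note that `~` in `dpo_output_dir` is not expanded by YAML itself; whatever consumes the path must expand it, as `os.path.expanduser` does here.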