feat(distro): fork off a starter-gpu distribution
parent 4434fcc2c3
commit c95db3a795
15 changed files with 522 additions and 31 deletions
docs/source/providers/post_training/inline_torchtune-gpu.md (new file, 20 lines)
@@ -0,0 +1,20 @@
# inline::torchtune-gpu
## Description
TorchTune-based post-training provider for fine-tuning and optimizing models using Meta's TorchTune framework.
## Configuration
| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `torch_seed` | `int \| None` | No | | |
| `checkpoint_format` | `Literal['meta', 'huggingface']` | No | meta | |
## Sample Configuration
```yaml
checkpoint_format: meta
```
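
For context, a provider configured this way would typically be wired into a distribution's `run.yaml` under the `post_training` API. The sketch below is illustrative and not part of this commit; the `provider_id` value and the field settings shown are assumed placeholders.

```yaml
# Hypothetical run.yaml excerpt (illustrative sketch, not from this commit):
# registers the inline torchtune-gpu provider for the post_training API.
providers:
  post_training:
  - provider_id: torchtune-gpu        # assumed id; any unique name works
    provider_type: inline::torchtune-gpu
    config:
      checkpoint_format: meta         # or 'huggingface'
      torch_seed: 42                  # optional; omit for a nondeterministic seed
```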