# Post_Training

## Overview

This section contains documentation for all available providers for the **post_training** API.

## Providers

```{toctree}
:maxdepth: 1

inline_huggingface-gpu
inline_torchtune-cpu
inline_torchtune-gpu
remote_nvidia
```
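As a rough sketch of how one of these providers is enabled in a distribution, the excerpt below shows a hypothetical `post_training` entry in a `run.yaml`. The `provider_type` is assumed to mirror the page names above (Llama Stack provider types use the `inline::`/`remote::` prefixes), and the `config` body is left empty here because its fields are provider-specific; see the individual provider pages for the supported options.

```yaml
# Hypothetical run.yaml excerpt (illustrative only; the fields under
# `config` are provider-specific and documented on each provider page).
providers:
  post_training:
    - provider_id: huggingface-gpu
      provider_type: inline::huggingface-gpu  # assumed from the provider page name above
      config: {}
```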