llama-stack-mirror/llama_stack/providers/inline/post_training/huggingface
Nehanth Narendrula 58ffd82853
fix: Update SFTConfig parameter to fix CI and Post Training Workflow (#2948)
# What does this PR do?

- Change `max_seq_length` to `max_length` in the `SFTConfig` constructor
- TRL deprecated `max_seq_length` in Feb 2025 and removed it in v0.20.0
- Reference: https://github.com/huggingface/trl/pull/2895

This resolves the SFT training failure in CI tests.
2025-07-29 11:14:04 -07:00
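
For context, here is a minimal sketch of the rename described in the PR body. It is illustrative only, not the provider's actual recipe code; the `output_dir` value and the length of 2048 are placeholder assumptions.

```python
# Illustrative sketch of the SFTConfig parameter rename; output_dir and the
# length value are placeholder assumptions, not values from this repository.
from trl import SFTConfig

# Before: accepted on older TRL releases, removed in v0.20.0
# config = SFTConfig(output_dir="./sft_out", max_seq_length=2048)

# After: the same setting is now passed as max_length
config = SFTConfig(
    output_dir="./sft_out",  # placeholder output directory
    max_length=2048,         # formerly max_seq_length; caps tokenized sequence length
)
```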
recipes           fix: Update SFTConfig parameter to fix CI and Post Training Workflow (#2948)  2025-07-29 11:14:04 -07:00
__init__.py       feat: add huggingface post_training impl (#2132)                               2025-05-16 14:41:28 -07:00
config.py         feat: add huggingface post_training impl (#2132)                               2025-05-16 14:41:28 -07:00
post_training.py  feat: add huggingface post_training impl (#2132)                               2025-05-16 14:41:28 -07:00