llama-stack-mirror/llama_stack
Nehanth Narendrula 58ffd82853
fix: Update SFTConfig parameter to fix CI and Post Training Workflow (#2948)
# What does this PR do?

- Change max_seq_length to max_length in SFTConfig constructor
- TRL deprecated max_seq_length in Feb 2025 (v0.15.0) and removed it in v0.20.0
- Reference: https://github.com/huggingface/trl/pull/2895

This resolves the SFT training failure seen in CI tests.
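The rename is a one-keyword change at the `SFTConfig` call site. A minimal sketch of what the fix looks like, using a hypothetical stand-in dataclass rather than `trl` itself (the real `SFTConfig` has many more fields, and `trl` may not be installed where this is read):

```python
from dataclasses import dataclass

# Hypothetical stand-in for trl.SFTConfig, illustrating only the renamed
# field: max_seq_length was deprecated in favor of max_length and then
# removed entirely in TRL v0.20.0.
@dataclass
class SFTConfig:
    output_dir: str = "./sft_output"
    max_length: int = 1024  # formerly max_seq_length

# Old call sites such as SFTConfig(max_seq_length=2048) now fail with
# TypeError: unexpected keyword argument; the fix is the new keyword:
config = SFTConfig(max_length=2048)
print(config.max_length)
```

Against TRL itself, the same change applies verbatim: replace the `max_seq_length=` keyword with `max_length=` wherever `SFTConfig` is constructed.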
2025-07-29 11:14:04 -07:00
| Name | Last commit | Date |
|------|-------------|------|
| apis | feat: add base64 encoded PDF support for OpenAI Chat Completions (#2881) | 2025-07-29 06:23:41 -04:00 |
| cli | fix: separate build and run provider types (#2917) | 2025-07-25 12:39:26 -07:00 |
| distribution | chore: revert #2855 (#2939) | 2025-07-28 15:30:25 -07:00 |
| models | chore(api): add mypy coverage to chat_format (#2654) | 2025-07-18 11:56:53 +02:00 |
| providers | fix: Update SFTConfig parameter to fix CI and Post Training Workflow (#2948) | 2025-07-29 11:14:04 -07:00 |
| strong_typing | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| templates | feat(openai): add configurable base_url support with OPENAI_BASE_URL env var (#2919) | 2025-07-28 10:16:02 -07:00 |
| ui | build: Bump version to 0.2.16 | 2025-07-28 23:13:50 +00:00 |
| __init__.py | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| env.py | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| log.py | fix: use logger for console telemetry (#2844) | 2025-07-24 16:26:59 -04:00 |
| schema_utils.py | feat(auth): API access control (#2822) | 2025-07-24 15:30:48 -07:00 |