llama-stack-mirror/llama_stack
Charlie Doern 65b4fae51d
fix: proper checkpointing logic for HF trainer (#2429)
# What does this PR do?

Currently, only the last saved model is reported as a checkpoint and
associated with the job UUID. Since the HF trainer handles checkpoint
collection during training, we need to add all of the `checkpoint-*`
folders as Checkpoint objects. Adjust the save strategy to be per-epoch
to make this easier and to use less storage.
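
A minimal sketch of the idea, not the PR's actual implementation: the `Checkpoint` field names, the `output_dir` layout, and the `job_uuid` wiring below are illustrative assumptions; only `TrainingArguments(save_strategy="epoch")` is standard HF trainer API.

```python
from pathlib import Path

from transformers import TrainingArguments


def gather_checkpoints(output_dir: str, job_uuid: str) -> list[dict]:
    """Collect every checkpoint-* folder the HF trainer wrote, not just the last one.

    The dicts stand in for llama-stack Checkpoint objects; the real field
    names in the PR may differ.
    """
    checkpoints = []
    for ckpt_dir in sorted(Path(output_dir).glob("checkpoint-*")):
        if ckpt_dir.is_dir():
            checkpoints.append(
                {
                    "identifier": ckpt_dir.name,  # e.g. "checkpoint-3"
                    "path": str(ckpt_dir),
                    "job_uuid": job_uuid,  # associate each checkpoint with the job
                }
            )
    return checkpoints


# Saving once per epoch (instead of every N steps) yields fewer, predictable
# checkpoint-* folders and uses less storage.
training_args = TrainingArguments(
    output_dir="/tmp/post_training_job",  # illustrative path
    save_strategy="epoch",
)
```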

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-06-27 17:36:25 -04:00
apis chore: standardize unsupported model error #2517 (#2518) 2025-06-27 14:26:58 -04:00
cli fix: stack build (#2485) 2025-06-20 15:15:43 -07:00
distribution fix: dataset metadata without provider_id (#2527) 2025-06-27 08:51:29 -04:00
models fix: finish conversion to StrEnum (#2514) 2025-06-26 08:01:26 +05:30
providers fix: proper checkpointing logic for HF trainer (#2429) 2025-06-27 17:36:25 -04:00
strong_typing chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
templates fix: Some missed env variable changes from PR 2490 (#2538) 2025-06-26 17:59:15 -07:00
ui fix(ui): ensure initial data fetch only happens once (#2486) 2025-06-24 12:22:55 +02:00
__init__.py export LibraryClient 2024-12-13 12:08:00 -08:00
env.py refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) 2025-03-04 14:53:47 -08:00
log.py chore: remove nested imports (#2515) 2025-06-26 08:01:05 +05:30
schema_utils.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00