llama-stack-mirror/llama_stack
Charlie Doern d6228bb90e fix: proper checkpointing logic for HF trainer
Currently only the last saved model is reported as a checkpoint and associated with the job UUID. Since the HF trainer handles checkpoint collection during training, we need to register all of the `checkpoint-*` folders as Checkpoint objects. Adjust the save strategy to per-epoch to make this easier and to use less storage.

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-06-25 20:01:36 -04:00
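The fix described above can be sketched as follows. This is an illustrative, hedged example, not the actual llama-stack implementation: the `Checkpoint` dataclass and `collect_checkpoints` helper are hypothetical stand-ins for the provider's real types, and the folder-naming assumption (HF Trainer writes `checkpoint-<step>` directories, with `TrainingArguments(save_strategy="epoch")` producing one per epoch) is the relevant HF behavior.

```python
# Hypothetical sketch: gather every checkpoint-* folder the HF Trainer wrote,
# rather than reporting only the last saved model. The Checkpoint type and
# identifier scheme here are illustrative, not llama-stack's actual API.
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Checkpoint:
    identifier: str  # e.g. "<job-uuid>-checkpoint-3"
    epoch: int       # HF names folders by global step; per-epoch saving
                     # makes one folder per epoch
    path: str


def collect_checkpoints(output_dir: str, job_uuid: str) -> list[Checkpoint]:
    """Build a Checkpoint object for each checkpoint-* folder, in step order."""
    checkpoints = []
    for entry in sorted(
        Path(output_dir).glob("checkpoint-*"),
        key=lambda p: int(p.name.split("-")[-1]),
    ):
        step = int(entry.name.split("-")[-1])
        checkpoints.append(
            Checkpoint(
                identifier=f"{job_uuid}-{entry.name}",
                epoch=step,
                path=str(entry),
            )
        )
    return checkpoints
```

Sorting numerically on the step suffix matters: a plain lexical sort would place `checkpoint-10` before `checkpoint-2`.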
| Name | Last commit | Date |
|------|-------------|------|
| apis | feat: Add ChunkMetadata to Chunk (#2497) | 2025-06-25 15:55:23 -04:00 |
| cli | fix: stack build (#2485) | 2025-06-20 15:15:43 -07:00 |
| distribution | fix: Ollama should be optional in starter distro (#2482) | 2025-06-25 15:54:00 +02:00 |
| models | ci: add python package build test (#2457) | 2025-06-19 18:57:32 +05:30 |
| providers | fix: proper checkpointing logic for HF trainer | 2025-06-25 20:01:36 -04:00 |
| strong_typing | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| templates | fix: Ollama should be optional in starter distro (#2482) | 2025-06-25 15:54:00 +02:00 |
| ui | fix(ui): ensure initial data fetch only happens once (#2486) | 2025-06-24 12:22:55 +02:00 |
| __init__.py | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| env.py | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| log.py | ci: fix external provider test (#2438) | 2025-06-12 16:14:32 +02:00 |
| schema_utils.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |