llama-stack-mirror/llama_stack/providers
Charlie Doern d6228bb90e fix: proper checkpointing logic for HF trainer
Currently only the last saved model is reported as a checkpoint and associated with the job UUID. Since the HF trainer handles checkpoint collection during training, we need to add all of the `checkpoint-*` folders as Checkpoint objects. Adjust the save strategy to be per-epoch to make this easier and to use less storage.

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-06-25 20:01:36 -04:00
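The change described above amounts to two pieces: saving once per epoch, and reporting every `checkpoint-*` folder rather than only the last one. Below is a minimal sketch of that idea, assuming a HuggingFace `TrainingArguments`-based setup; the `collect_checkpoints` helper and the checkpoint fields (`identifier`, `path`, `epoch`) are illustrative assumptions, not the actual provider code.

```python
# A sketch of the approach described in the commit message, not the actual
# provider implementation. Helper names and checkpoint fields are assumptions.
from pathlib import Path

from transformers import TrainingArguments


def make_training_args(output_dir: str) -> TrainingArguments:
    # Save once per epoch so each checkpoint-* folder corresponds to one epoch
    # and fewer intermediate step checkpoints accumulate on disk.
    return TrainingArguments(output_dir=output_dir, save_strategy="epoch")


def collect_checkpoints(output_dir: str, job_uuid: str) -> list[dict]:
    # The HF Trainer writes one checkpoint-<step> directory per save; report
    # all of them (not just the last) and associate each with the job UUID.
    ckpt_dirs = sorted(
        Path(output_dir).glob("checkpoint-*"),
        key=lambda p: int(p.name.split("-")[-1]),  # order by training step
    )
    return [
        {
            "identifier": f"{job_uuid}-{ckpt.name}",  # hypothetical naming scheme
            "path": str(ckpt),
            "epoch": epoch,  # with save_strategy="epoch", the Nth folder is epoch N
        }
        for epoch, ckpt in enumerate(ckpt_dirs, start=1)
    ]
```

Tying the save strategy to epochs keeps the number of folders small and lets each saved folder map directly to one reported checkpoint per epoch.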
inline fix: proper checkpointing logic for HF trainer 2025-06-25 20:01:36 -04:00
registry feat: File search tool for Responses API (#2426) 2025-06-13 14:32:48 -04:00
remote feat: Add ChunkMetadata to Chunk (#2497) 2025-06-25 15:55:23 -04:00
utils fix: resume responses with tool call output (#2524) 2025-06-25 14:43:37 -07:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00
datatypes.py fix(tools): do not index tools, only index toolgroups (#2261) 2025-05-25 13:27:52 -07:00