llama-stack/llama_stack
Botao Chen, e86271aeac, 2025-01-03 17:33:05 -08:00

support llama3.1 8B instruct in post training (#698)
## What does this PR do? 
- Switch from the llama3 8B model to the llama3.1 8B instruct model, since
the instruct model is a better base to fine-tune on top of
- Make the file-copy logic in the checkpointer safer by handling the case
where a file to be copied does not exist in the source path (a sketch of
this kind of guard follows this list)
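
A minimal sketch of the guard described in the second bullet, assuming the checkpointer copies auxiliary files with `shutil`; the function and argument names here are illustrative, not the actual implementation:

```python
import shutil
from pathlib import Path


def copy_if_exists(src_dir: Path, dst_dir: Path, filenames: list[str]) -> None:
    """Copy each named file from src_dir to dst_dir, skipping files that
    are missing from the source path instead of raising an error."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    for name in filenames:
        src = src_dir / name
        if not src.exists():
            # File is absent in the source checkpoint; skip it rather than fail.
            continue
        shutil.copy2(src, dst_dir / name)
```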

## Test
Issued a post-training request from the client and verified that training
works as expected:
<img width="1101" alt="Screenshot 2025-01-02 at 12 18 45 PM"
src="https://github.com/user-attachments/assets/47cc4df9-3edc-4afd-b5dd-abe1f039f1ed"
/>

<img width="782" alt="Screenshot 2025-01-02 at 12 18 52 PM"
src="https://github.com/user-attachments/assets/b9435274-ef1d-4570-bd8e-0880c3a4b2e9"
/>
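
For reference, a hedged sketch of issuing such a post-training request from the Python client; the endpoint, method, and parameter names below are assumptions about the client API at the time of this PR, not taken from it verbatim:

```python
from llama_stack_client import LlamaStackClient

# Assumed local endpoint; adjust to your deployment.
client = LlamaStackClient(base_url="http://localhost:5000")

# Assumed method and parameter names for the post-training API;
# the model identifier for the newly supported model is also assumed.
job = client.post_training.supervised_fine_tune(
    job_uuid="sft-llama3.1-8b-instruct",
    model="meta-llama/Llama-3.1-8B-Instruct",
    algorithm_config={"type": "LoRA"},
    training_config={
        "n_epochs": 1,
        "data_config": {"dataset_id": "my-sft-dataset", "batch_size": 1},
    },
    hyperparam_search_config={},
    logger_config={},
)
print(job)
```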

| Path | Last commit | Date |
| --- | --- | --- |
| `apis` | [Post training] make validation steps configurable (#715) | 2025-01-03 08:43:24 -08:00 |
| `cli` | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| `distribution` | Fix incorrect entrypoint for broken llama stack run (#706) | 2025-01-03 09:47:10 -08:00 |
| `providers` | support llama3.1 8B instruct in post training (#698) | 2025-01-03 17:33:05 -08:00 |
| `scripts` | Fix to conda env build script | 2024-12-17 12:19:34 -08:00 |
| `templates` | Change post training run.yaml inference config (#710) | 2025-01-03 08:37:48 -08:00 |
| `__init__.py` | export LibraryClient | 2024-12-13 12:08:00 -08:00 |