llama-stack-mirror/llama_stack/providers
Botao Chen 06cb0c837e
[torchtune integration] post training + eval (#670)
## What does this PR do?

- Add the related APIs to the experimental-post-training template to enable eval
on the finetuned checkpoint in the template
- A small bug fix in meta reference eval
- A small error-handling improvement in post training


## Test Plan
Issued an E2E post-training request from the client side
(https://github.com/meta-llama/llama-stack-client-python/pull/70) and got
eval results back successfully.

<img width="1315" alt="Screenshot 2024-12-20 at 12 06 59 PM"
src="https://github.com/user-attachments/assets/a09bd524-59ae-490c-908f-2e36ccf27c0a"
/>
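
The E2E flow exercised above (kick off a fine-tuning job, then eval the resulting checkpoint) can be sketched roughly as below. This is a minimal illustration only: every function name and request field here is a hypothetical placeholder, not the actual llama-stack-client API.

```python
# Hypothetical sketch of the fine-tune-then-eval flow from the Test Plan.
# All names (functions, fields, values) are illustrative assumptions,
# not the real llama-stack-client API surface.

def build_post_training_request(model: str, dataset_id: str, job_uuid: str) -> dict:
    """Assemble a placeholder supervised fine-tuning request body."""
    return {
        "job_uuid": job_uuid,
        "model": model,
        "dataset_id": dataset_id,
        "algorithm": "supervised_fine_tune",  # placeholder algorithm name
    }

def build_eval_request(checkpoint_path: str, benchmark_id: str) -> dict:
    """Assemble a placeholder eval request targeting the finetuned checkpoint."""
    return {
        "checkpoint": checkpoint_path,
        "benchmark_id": benchmark_id,
    }

if __name__ == "__main__":
    # Step 1: request fine-tuning; step 2: eval the produced checkpoint.
    ft = build_post_training_request("Llama-3.2-3B", "my-dataset", "job-001")
    ev = build_eval_request("/checkpoints/job-001/epoch_2", "my-benchmark")
    print(ft["job_uuid"], ev["benchmark_id"])
```

The point of the sketch is only the ordering: the eval request consumes the checkpoint path produced by the post-training job, which is the template wiring this PR enables.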
2024-12-20 13:43:13 -08:00
| Name | Last commit | Date |
| --- | --- | --- |
| inline | [torchtune integration] post training + eval (#670) | 2024-12-20 13:43:13 -08:00 |
| registry | Tools API with brave and MCP providers (#639) | 2024-12-19 21:25:17 -08:00 |
| remote | Tools API with brave and MCP providers (#639) | 2024-12-19 21:25:17 -08:00 |
| tests | [4/n][torchtune integration] support lazy load model during inference (#620) | 2024-12-18 16:30:53 -08:00 |
| utils | Add Llama 70B 3.3 to fireworks (#654) | 2024-12-19 17:32:49 -08:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| datatypes.py | Tools API with brave and MCP providers (#639) | 2024-12-19 21:25:17 -08:00 |