## What does this PR do?

- Add the related APIs to the experimental-post-training template to enable eval on the fine-tuned checkpoint within the template
- A small bug fix in meta reference eval
- A small error-handling improvement in post training

## Test Plan

Issued an E2E post-training request from the client side (https://github.com/meta-llama/llama-stack-client-python/pull/70) and got eval results successfully:

<img width="1315" alt="Screenshot 2024-12-20 at 12 06 59 PM" src="https://github.com/user-attachments/assets/a09bd524-59ae-490c-908f-2e36ccf27c0a" />
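For reference, a minimal sketch of the E2E flow the test plan exercises: kick off a fine-tuning job, then eval the resulting checkpoint. The method names, parameter shapes, model names, and task IDs below are assumptions about the llama-stack-client-python surface at the time, not the exact calls from the linked client PR.

```python
# Hypothetical E2E sketch (not the exact test-plan code): fine-tune, then eval.
# Method and parameter names are assumptions about llama-stack-client-python.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")  # assumed local stack server

# 1. Launch a supervised fine-tuning job against the
#    experimental-post-training distribution.
#    algorithm_config / training_config shapes are placeholders.
job = client.post_training.supervised_fine_tune(
    job_uuid="e2e-test-job",                       # assumed job identifier
    model="meta-llama/Llama-3.2-3B-Instruct",      # assumed base model
    algorithm_config={"type": "LoRA"},             # placeholder fine-tune config
    training_config={"n_epochs": 1},               # placeholder training config
    hyperparam_search_config={},
    logger_config={},
    checkpoint_dir="output",
)

# 2. After the job completes, run eval against the fine-tuned checkpoint
#    via the newly wired-up eval API (task id and config are placeholders).
result = client.eval.run_eval(
    task_id="meta-reference-mmlu",                 # hypothetical eval task id
    task_config={
        "type": "benchmark",
        "eval_candidate": {
            "type": "model",
            "model": "<finetuned-checkpoint-id>",  # checkpoint produced above
            "sampling_params": {"max_tokens": 512},
        },
    },
)
print(result)
```

What this PR enables is step 2: with the eval-related APIs now part of the experimental-post-training template, the same stack that ran the fine-tune can evaluate its checkpoint.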