llama-stack/.github
Botao Chen f369871083
feat: [New Eval Benchmark] IfEval (#1708)
# What does this PR do?
This PR adds a new open eval benchmark, IfEval, based on the paper
https://arxiv.org/abs/2311.07911, to measure a model's instruction-following
capability.
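
IfEval scores models on "verifiable instructions" that can be checked programmatically. Purely as an illustration of that idea (this is not the benchmark's actual grading code; the check functions, thresholds, and example text below are made up), a minimal sketch might look like:

```python
# Illustrative only: toy "verifiable instruction" checks in the spirit of IfEval
# (https://arxiv.org/abs/2311.07911). Function names and thresholds are hypothetical.

def follows_min_word_count(response: str, min_words: int) -> bool:
    """Check the verifiable instruction 'answer in at least `min_words` words'."""
    return len(response.split()) >= min_words

def follows_keyword_frequency(response: str, keyword: str, min_count: int) -> bool:
    """Check the verifiable instruction 'mention `keyword` at least `min_count` times'."""
    return response.lower().count(keyword.lower()) >= min_count

if __name__ == "__main__":
    response = "Llama Stack makes evals easy. Llama Stack also ships benchmark templates."
    checks = [
        follows_min_word_count(response, 10),
        follows_keyword_frequency(response, "llama stack", 2),
    ]
    # Aggregating accuracy over verifiable instructions is the kind of metric IfEval reports.
    print(f"instruction-level accuracy: {sum(checks) / len(checks):.2f}")
```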


## Test Plan

Spin up a Llama Stack server with the open-benchmark template.
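
A rough sketch of that setup, assuming the standard `llama stack` CLI (the `open-benchmark` template comes from this PR, but the exact flags, image type, and port are assumptions and may differ by version):

```bash
# Build a distribution from the open-benchmark template, then start the server.
# Assumed flags; adjust to your llama-stack version and environment.
llama stack build --template open-benchmark --image-type conda
llama stack run open-benchmark --port 8321
```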

Then, on the client side, run
`llama-stack-client --endpoint xxx eval run-benchmark "meta-reference-ifeval" --model-id "meta-llama/Llama-3.3-70B-Instruct" --output-dir "/home/markchen1015/" --num-examples 20`
and check the aggregate eval results.
Committed 2025-03-19 16:39:59 -07:00
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| ISSUE_TEMPLATE | github: issue templates automatically apply relevant label (#956) | 2025-02-04 14:44:03 -08:00 |
| workflows | feat: [New Eval Benchmark] IfEval (#1708) | 2025-03-19 16:39:59 -07:00 |
| CODEOWNERS | chore: Update CODEOWNERS (#1407) | 2025-03-04 21:48:24 -08:00 |
| dependabot.yml | ci: Add dependabot scans for Python deps (#1618) | 2025-03-17 20:20:31 -07:00 |
| PULL_REQUEST_TEMPLATE.md | docs: remove changelog mention from PR template (#1049) | 2025-02-11 13:24:53 -05:00 |
| TRIAGERS.md | chore: Add triagers list #1561 (#1701) | 2025-03-19 09:59:17 -07:00 |