llama-stack-mirror/llama_stack/providers/utils/scoring
Botao Chen f369871083
feat: [New Eval Benchmark] IfEval (#1708)
# What does this PR do?
This PR adds IfEval, a new open eval benchmark based on the paper
https://arxiv.org/abs/2311.07911, to measure a model's
instruction-following capability.
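
The benchmark scores responses against programmatically verifiable instructions (e.g. length limits or required keywords). A minimal sketch of that idea, assuming hypothetical names (`Instruction`, `check_instruction`, `score_example`) that are illustrative only and not the identifiers used in this PR:

```python
# Illustrative sketch of IfEval-style scoring: each example carries
# verifiable instructions that can be checked programmatically.
# All names here are hypothetical, not the ones added in this PR.
from dataclasses import dataclass


@dataclass
class Instruction:
    kind: str        # e.g. "max_words", "keyword"
    arg: str | int   # constraint argument


def check_instruction(response: str, inst: Instruction) -> bool:
    """Return True if the response satisfies one verifiable instruction."""
    if inst.kind == "max_words":
        return len(response.split()) <= int(inst.arg)
    if inst.kind == "keyword":
        return str(inst.arg).lower() in response.lower()
    raise ValueError(f"unknown instruction kind: {inst.kind}")


def score_example(response: str, instructions: list[Instruction]) -> float:
    """Instruction-level accuracy for a single example."""
    if not instructions:
        return 1.0
    passed = sum(check_instruction(response, i) for i in instructions)
    return passed / len(instructions)
```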


## Test Plan
Spin up a Llama Stack server with the open-benchmark template.

On the client side, run `llama-stack-client --endpoint xxx eval run-benchmark "meta-reference-ifeval" --model-id "meta-llama/Llama-3.3-70B-Instruct" --output-dir "/home/markchen1015/" --num-examples 20` and check the aggregate eval results.
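
The aggregate results boil down to reducing per-row scores into summary metrics. A rough sketch of what such an aggregation helper amounts to, assuming a simple `{"score": float}` row shape for illustration rather than the exact llama-stack types:

```python
# Hedged sketch of score aggregation: average the per-row scores
# produced by a scoring function. The row shape ({"score": float})
# is an assumption for illustration, not the exact llama-stack type.
from typing import Any


def aggregate_average(rows: list[dict[str, Any]]) -> dict[str, float]:
    """Average the 'score' field over all scored rows."""
    scores = [row["score"] for row in rows if row.get("score") is not None]
    return {"average": sum(scores) / len(scores) if scores else 0.0}


# Usage: aggregate_average([{"score": 1.0}, {"score": 0.5}]) -> {"average": 0.75}
```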
2025-03-19 16:39:59 -07:00
..
__init__.py add missing __init__ 2024-11-25 09:42:46 -08:00
aggregation_utils.py feat: [New Eval Benchmark] IfEval (#1708) 2025-03-19 16:39:59 -07:00
base_scoring_fn.py test: revamp eval related integration tests (#1433) 2025-03-06 10:51:35 -08:00
basic_scoring_utils.py feat: [new open benchmark] Math 500 (#1538) 2025-03-10 20:38:28 -07:00