# What does this PR do?

This PR adds a new open eval benchmark, IFEval, based on the paper https://arxiv.org/abs/2311.07911, to measure a model's instruction-following capability.

## Test Plan

Spin up a Llama Stack server with the open-benchmark template, then run the benchmark from the client side with

`llama-stack-client --endpoint xxx eval run-benchmark "meta-reference-ifeval" --model-id "meta-llama/Llama-3.3-70B-Instruct" --output-dir "/home/markchen1015/" --num-examples 20`

and collect the eval aggregate results.
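For context, IFEval scores a response by checking programmatically verifiable instructions attached to each prompt (for example, minimum word counts or required keywords). The sketch below illustrates that scoring idea in plain Python; the function names and instruction types are illustrative assumptions, not the code added in this PR.

```python
# Illustrative sketch (not the PR's implementation): IFEval-style scoring
# checks each response against verifiable instructions from arXiv:2311.07911.

def check_min_word_count(response: str, min_words: int) -> bool:
    """Verify an 'answer in at least N words' instruction."""
    return len(response.split()) >= min_words

def check_keyword_present(response: str, keyword: str) -> bool:
    """Verify a 'mention the keyword X' instruction."""
    return keyword.lower() in response.lower()

def score_example(response: str, checks: list) -> float:
    """Fraction of verifiable instructions the response satisfies."""
    results = [check(response) for check in checks]
    return sum(results) / len(results) if results else 0.0

# Example usage: two hypothetical instructions attached to one prompt.
checks = [
    lambda r: check_min_word_count(r, 50),
    lambda r: check_keyword_present(r, "llama"),
]
print(score_example("A short reply about llama models ...", checks))
```

Aggregating this per-example score over the dataset yields the kind of benchmark-level result reported by the `run-benchmark` command above.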