# What does this PR do?

This PR adds a new open eval benchmark, IFEval, based on the paper https://arxiv.org/abs/2311.07911, to measure a model's instruction-following capability.

## Test Plan

Spin up a Llama Stack server with the open-benchmark template, then run the following on the client side and check the aggregate eval results:

`llama-stack-client --endpoint xxx eval run-benchmark "meta-reference-ifeval" --model-id "meta-llama/Llama-3.3-70B-Instruct" --output-dir "/home/markchen1015/" --num-examples 20`