# What does this PR do?

This PR adds a new open eval benchmark, IFEval, based on the paper https://arxiv.org/abs/2311.07911, to measure a model's instruction-following capability.

## Test Plan

Spin up a Llama Stack server with the open-benchmark template, then run `llama-stack-client --endpoint xxx eval run-benchmark "meta-reference-ifeval" --model-id "meta-llama/Llama-3.3-70B-Instruct" --output-dir "/home/markchen1015/" --num-examples 20` on the client side and check the eval aggregate results.
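IFEval scores instruction following with programmatically verifiable constraints (e.g. "respond entirely in lowercase", "use at least N words") rather than a judge model. A minimal sketch of that idea is below; the function names and the strict all-checks-must-pass scoring are illustrative assumptions, not the actual benchmark implementation added in this PR.

```python
# Sketch of IFEval-style verifiable-instruction checks (illustrative,
# not the real benchmark code).

def check_lowercase(response: str) -> bool:
    """Instruction: 'Your entire response should be in lowercase.'"""
    return response == response.lower()

def check_min_words(response: str, n: int) -> bool:
    """Instruction: 'Answer with at least n words.'"""
    return len(response.split()) >= n

def strict_accuracy(responses, checks) -> float:
    """Fraction of responses satisfying ALL of their attached checks."""
    passed = sum(
        1 for r, cs in zip(responses, checks) if all(c(r) for c in cs)
    )
    return passed / len(responses)

responses = ["hello world, this is fine", "ALL CAPS REPLY"]
checks = [
    [check_lowercase, lambda r: check_min_words(r, 3)],  # both pass
    [check_lowercase],                                   # fails
]
print(strict_accuracy(responses, checks))  # 0.5
```

Because each constraint is checked deterministically, the aggregate results reported by `eval run-benchmark` are reproducible for a fixed set of model responses.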