# llama-stack/llama_stack/providers/registry

Latest commit: feat: [New Eval Benchmark] IfEval (#1708) by Botao Chen (f369871083), 2025-03-19 16:39:59 -07:00
# What does this PR do?

This PR adds IfEval, a new open eval benchmark based on the paper "Instruction-Following Evaluation for Large Language Models" (https://arxiv.org/abs/2311.07911), to measure a model's instruction-following capability.
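For context, IfEval scores a response against *verifiable* instructions, i.e. constraints that can be checked programmatically rather than judged by another model. Below is a minimal illustrative sketch of that scoring idea in Python; the instruction ids and checker functions are hypothetical stand-ins, not the benchmark's actual implementation.

```python
# Illustrative sketch of IfEval-style "verifiable instruction" scoring.
# NOTE: the instruction ids and checkers here are hypothetical stand-ins,
# not the scoring code added by this PR.

def check_all_lowercase(response: str) -> bool:
    """Instruction: the entire response must be lowercase."""
    return response == response.lower()

def check_min_words(response: str, min_words: int = 50) -> bool:
    """Instruction: the response must contain at least min_words words."""
    return len(response.split()) >= min_words

CHECKERS = {
    "change_case:english_lowercase": check_all_lowercase,
    "length_constraints:number_words": check_min_words,
}

def score_response(response: str, instruction_ids: list[str]) -> float:
    """Return the fraction of verifiable instructions the response satisfies."""
    results = [CHECKERS[instruction_id](response) for instruction_id in instruction_ids]
    return sum(results) / len(results)

# A response that follows one of its two instructions scores 0.5.
print(score_response("a short answer.", [
    "change_case:english_lowercase",
    "length_constraints:number_words",
]))  # 0.5
```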


## Test Plan

Spin up a Llama Stack server with the open-benchmark template, then run the following on the client side and check the aggregate eval results:

```
llama-stack-client --endpoint xxx eval run-benchmark "meta-reference-ifeval" \
  --model-id "meta-llama/Llama-3.3-70B-Instruct" \
  --output-dir "/home/markchen1015/" \
  --num-examples 20
```
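The same run can also be kicked off programmatically. The sketch below uses the llama-stack-client Python SDK; the method name (`client.eval.run_eval`) and the shape of `benchmark_config` follow docs from around this release, so treat them as assumptions to verify against your installed client version.

```python
# Sketch: running the IfEval benchmark through the Python SDK instead of
# the CLI. ASSUMPTION: client.eval.run_eval and the benchmark_config shape
# below match the llama-stack-client version in use; verify before relying
# on this.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # your server endpoint

job = client.eval.run_eval(
    benchmark_id="meta-reference-ifeval",
    benchmark_config={
        "eval_candidate": {
            "type": "model",
            "model": "meta-llama/Llama-3.3-70B-Instruct",
            "sampling_params": {"max_tokens": 512, "strategy": {"type": "greedy"}},
        },
    },
)
print(job)
```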
Directory contents and last-touching commits:

| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| agents.py | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| datasetio.py | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| eval.py | feat: [New Eval Benchmark] IfEval (#1708) | 2025-03-19 16:39:59 -07:00 |
| inference.py | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| post_training.py | Pin torchtune pkg version (#791) | 2025-01-16 16:31:13 -08:00 |
| safety.py | feat: added nvidia as safety provider (#1248) | 2025-03-17 14:39:23 -07:00 |
| scoring.py | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| telemetry.py | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| tool_runtime.py | chore: move embedding deps to RAG tool where they are needed (#1210) | 2025-02-21 11:33:41 -08:00 |
| vector_io.py | feat: Qdrant inline provider (#1273) | 2025-03-18 14:04:21 -07:00 |