update doc

Botao Chen 2025-03-07 15:01:26 -08:00
parent 868c1557a0
commit 3e301fb16d
3 changed files with 3 additions and 3 deletions

@@ -37,7 +37,7 @@ The list of open-benchmarks we currently support:
 - [MMMU](https://arxiv.org/abs/2311.16502) (A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI): Benchmark designed to evaluate multimodal models.
-You can follow this [contributing guide](../references/evals_reference/index.html#open-benchmark-contributing-guide) to add more open-benchmarks to Llama Stack
+You can follow this [contributing guide](https://llama-stack.readthedocs.io/en/latest/references/evals_reference/index.html#open-benchmark-contributing-guide) to add more open-benchmarks to Llama Stack
 ### Run evaluation on open-benchmarks via CLI
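
A minimal sketch of what that CLI step might look like, assuming the open-benchmark distribution is already running. The benchmark id, flag names, and model id below are assumptions based on the docs of that era, not verbatim from the `llama-stack-client` interface:

```bash
# Run a registered open-benchmark against a served model.
# Benchmark id, flags, and model id are illustrative assumptions.
llama-stack-client eval run-benchmark "meta-reference-mmlu-pro" \
  --model_id meta-llama/Llama-3.3-70B-Instruct \
  --output_dir /tmp/eval-results
```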

@@ -372,7 +372,7 @@ The purpose of a scoring function is to calculate the score for each example based
 Firstly, you can see if the existing [llama stack scoring functions](https://github.com/meta-llama/llama-stack/tree/main/llama_stack/providers/inline/scoring) can fulfill your needs. If not, you need to write a new scoring function based on what the benchmark author / other open-source repos describe.
 ### Add new benchmark into template
-Firstly, you need to add the evaluation dataset associated with your benchmark under the `datasets` resource in templates/open-benchmark/run.yaml
+Firstly, you need to add the evaluation dataset associated with your benchmark under the `datasets` resource in the [open-benchmark](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/templates/open-benchmark/run.yaml) template
 Secondly, you need to add the new benchmark you just created under the `benchmarks` resource in the same template. To add the new benchmark, you need to have
 - `benchmark_id`: identifier of the benchmark
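
For illustration, a minimal sketch of what those two entries might look like in run.yaml. The dataset id, URI, and scoring function below are hypothetical, and field names may not match the current schema:

```yaml
datasets:
- dataset_id: my_benchmark_dataset           # hypothetical dataset registered for eval
  provider_id: huggingface
  url:
    uri: https://huggingface.co/datasets/example/my-benchmark
benchmarks:
- benchmark_id: meta-reference-my-benchmark  # identifier of the benchmark
  dataset_id: my_benchmark_dataset           # must reference the dataset above
  scoring_functions:                         # how each example gets graded
  - basic::regex_parser_multiple_choice_answer
```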

@@ -1,5 +1,5 @@
 version: '2'
-image_name: dev
+image_name: open-benchmark
 apis:
 - agents
 - datasetio