# Evaluation Concepts
The Llama Stack Evaluation flow allows you to run evaluations on your GenAI application datasets or pre-registered benchmarks.

We introduce a set of APIs in Llama Stack to support running evaluations of LLM applications (a minimal client-side sketch follows this list):

- `/datasetio` + `/datasets` API
- `/scoring` + `/scoring_functions` API
- `/eval` + `/benchmarks` API

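
For orientation, here is a minimal Python sketch that touches each of these API groups through the `llama-stack-client` SDK. It is illustrative rather than canonical: it assumes a Llama Stack server is already running on `localhost:8321`, and the exact method and attribute names may vary with your client version.

```
from llama_stack_client import LlamaStackClient

# Assumes a running Llama Stack server; adjust the base URL to your setup.
client = LlamaStackClient(base_url="http://localhost:8321")

# /datasetio + /datasets: the datasets registered with the stack
print([d.identifier for d in client.datasets.list()])

# /scoring + /scoring_functions: the scoring functions available for grading outputs
print([fn.identifier for fn in client.scoring_functions.list()])

# /eval + /benchmarks: the benchmarks you can run evaluations against
print([b.identifier for b in client.benchmarks.list()])
```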
This guide goes over these APIs and the developer experience flow of using Llama Stack to run evaluations for different use cases. Check out our Colab notebook with working examples of evaluations [here](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing).

## Evaluation Concepts
The Evaluation APIs are associated with a set of Resources, as shown in the following diagram. Please visit the Resources section in our [Core Concepts](../concepts/index.md) guide for a better high-level understanding. A short sketch of how these resources fit together follows the list below.



- **DatasetIO**: defines the interface for datasets and data loaders.
  - Associated with the `Dataset` resource.
- **Scoring**: evaluates the outputs of the system.
  - Associated with the `ScoringFunction` resource. We provide a suite of out-of-the-box scoring functions and also the ability for you to add custom evaluators. These scoring functions are the core part of defining an evaluation task that outputs evaluation metrics.
- **Eval**: generates outputs (via Inference or Agents) and performs scoring.
  - Associated with the `Benchmark` resource.

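
To make the association concrete, the sketch below registers a `Benchmark` that ties a `Dataset` to a set of `ScoringFunction`s using the Python SDK. The identifiers are placeholders and the parameter names are assumptions based on the client at the time of writing, so treat this as a sketch rather than a reference.

```
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# A Benchmark references a Dataset (what to evaluate on) and one or more
# ScoringFunctions (how to grade the generated outputs).
# "my_eval_dataset" is a placeholder for a dataset you have already registered;
# "basic::subset_of" stands in for whatever scoring function fits your task.
client.benchmarks.register(
    benchmark_id="my_custom_benchmark",
    dataset_id="my_eval_dataset",
    scoring_functions=["basic::subset_of"],
)
```

Once registered, the benchmark can be run through the `/eval` API or the CLI described below.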
## Open-benchmark Eval

### List of open-benchmarks Llama Stack supports

Llama Stack pre-registers several popular open-benchmarks so you can easily evaluate model performance via the CLI.

The list of open-benchmarks we currently support:

- [MMLU-COT](https://arxiv.org/abs/2009.03300) (Measuring Massive Multitask Language Understanding): Benchmark designed to comprehensively evaluate the breadth and depth of a model's academic and professional understanding.
- [GPQA-COT](https://arxiv.org/abs/2311.12022) (A Graduate-Level Google-Proof Q&A Benchmark): A challenging benchmark of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry.
- [SimpleQA](https://openai.com/index/introducing-simpleqa/): Benchmark designed to assess a model's ability to answer short, fact-seeking questions.
- [MMMU](https://arxiv.org/abs/2311.16502) (A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI): Benchmark designed to evaluate multimodal models.
You can follow this contributing guide to add more open-benchmarks to Llama Stack.
### Run evaluation on open-benchmarks via CLI
We have built-in functionality to run the supported open-benchmarks using the llama-stack-client CLI.

#### Spin up Llama Stack server

Spin up the Llama Stack server with the 'open-benchmark' template:

```
llama stack run llama_stack/templates/open-benchmark/run.yaml
```
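Once the server is up, you can check which benchmarks the template pre-registered; this is also a convenient way to find the `benchmark_id`s to pass to the eval CLI in the next step. The snippet below is a rough Python sketch, assuming the server listens on the default port (adjust `base_url` as needed) and that benchmark objects expose `identifier` and `dataset_id` fields.

```
from llama_stack_client import LlamaStackClient

# Point this at your running open-benchmark server.
client = LlamaStackClient(base_url="http://localhost:8321")

# Print the pre-registered benchmarks and the datasets they are tied to.
for benchmark in client.benchmarks.list():
    print(benchmark.identifier, "->", benchmark.dataset_id)
```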
#### Run eval CLI

There are 3 necessary inputs to run a benchmark eval:

- `list of benchmark_ids`: The list of benchmark IDs to run the evaluation on
- `model_id`: The model ID to evaluate on
- `output_dir`: Path to store the evaluation results
```
llama-stack-client eval run-benchmark <benchmark_id_1> <benchmark_id_2> ... \
  --model_id <model id to evaluate on> \
  --output_dir <directory to store the evaluation results>
```
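The CLI above drives the `/eval` API, so you can also kick off a benchmark run from Python if you prefer. The sketch below is an approximation: the placeholders mirror the CLI arguments, and the exact shape of `benchmark_config` (and the `run_eval` signature) may differ across client versions.

```
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Roughly equivalent to `eval run-benchmark` for a single benchmark.
# Replace the placeholders with a real benchmark ID and model ID.
job = client.eval.run_eval(
    benchmark_id="<benchmark_id_1>",
    benchmark_config={
        "eval_candidate": {
            "type": "model",
            "model": "<model id to evaluate on>",
            "sampling_params": {"strategy": {"type": "greedy"}, "max_tokens": 4096},
        },
    },
)
print(job)
```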
You can run

```
llama-stack-client eval run-benchmark --help
```

to see the description of all the flags that `eval run-benchmark` supports.
In the output log, you can find the file path that contains your evaluation results. Open that file to see your aggregate evaluation results.

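
If you want to post-process the results programmatically, a small sketch like the one below can help. It assumes the results file is JSON, which may not match your version's output format exactly; the path is a placeholder for whatever the log printed.

```
import json
from pathlib import Path

# Placeholder: use the results file path printed in the eval CLI's output log.
results_path = Path("<output_dir>/<results file from the log>")

# Assumption: the file contains JSON with aggregate metrics per benchmark.
results = json.loads(results_path.read_text())
print(json.dumps(results, indent=2))
```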
## What's Next?
- Check out our Colab notebook with working examples of running benchmark evaluations [here](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb#scrollTo=mxLCsP4MvFqP).
- Check out our [Building Applications - Evaluation](../building_applications/evals.md) guide for more details on how to use the Evaluation APIs to evaluate your applications.
- Check out our [Evaluation Reference](../references/evals_reference/index.md) for more details on the APIs.