From 932c09b35cb82457923143488c0476c5c95dd14b Mon Sep 17 00:00:00 2001
From: Xi Yan
Date: Fri, 13 Dec 2024 13:54:31 -0800
Subject: [PATCH] restructure

---
 docs/source/benchmark_evaluations/index.md   | 167 ++++++++++++++++++
 docs/source/building_applications/index.md   |   4 +-
 docs/source/concepts/evaluation_concepts.md  |  40 +++++
 docs/source/concepts/index.md                |  10 ++
 docs/source/cookbooks/index.md               |  15 --
 docs/source/index.md                         |   2 +-
 .../evals_reference/index.md}                |   0
 .../resources/eval-concept.png               | Bin
 .../evals_reference}/resources/eval-flow.png | Bin
 docs/source/references/index.md              |   1 +
 10 files changed, 222 insertions(+), 17 deletions(-)
 create mode 100644 docs/source/benchmark_evaluations/index.md
 create mode 100644 docs/source/concepts/evaluation_concepts.md
 delete mode 100644 docs/source/cookbooks/index.md
 rename docs/source/{cookbooks/evals.md => references/evals_reference/index.md} (100%)
 rename docs/source/{cookbooks => references/evals_reference}/resources/eval-concept.png (100%)
 rename docs/source/{cookbooks => references/evals_reference}/resources/eval-flow.png (100%)

diff --git a/docs/source/benchmark_evaluations/index.md b/docs/source/benchmark_evaluations/index.md
new file mode 100644
index 000000000..240555936
--- /dev/null
+++ b/docs/source/benchmark_evaluations/index.md
@@ -0,0 +1,167 @@
+# Benchmark Evaluations
+
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing)
+
+Llama Stack provides the building blocks needed to run benchmark and application evaluations. This guide will walk you through how to use these components to run open benchmark evaluations. Visit our [Evaluation Concepts](../concepts/evaluation_concepts.md) guide for more details on how evaluations work in Llama Stack, and our [Evaluation Reference](../references/evals_reference/index.md) guide for a comprehensive reference on the APIs. Check out our [Colab notebook](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing) for working examples of how to run benchmark evaluations with Llama Stack.
+
+### 1. Open Benchmark Model Evaluation
+
+This first example walks you through how to evaluate a model candidate served by Llama Stack on open benchmarks. We will use the following benchmarks:
+- [MMMU](https://arxiv.org/abs/2311.16502) (A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI): a benchmark designed to evaluate multimodal models.
+- [SimpleQA](https://openai.com/index/introducing-simpleqa/): a benchmark designed to assess models' ability to answer short, fact-seeking questions.
+
+#### 1.1 Running MMMU
+- We will use a pre-processed MMMU dataset from [llamastack/mmmu](https://huggingface.co/datasets/llamastack/mmmu). The preprocessing code is shown in this [GitHub Gist](https://gist.github.com/yanxi0830/118e9c560227d27132a7fd10e2c92840). The dataset was obtained by transforming the original [MMMU/MMMU](https://huggingface.co/datasets/MMMU/MMMU) dataset into the format expected by the `inference/chat-completion` API; a simplified sketch of that transformation is shown below.
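+
+To give a feel for what that preprocessing does, here is a hypothetical, simplified sketch of the per-row transformation. The raw field names (`question`, `options`, `answer`) follow the public MMMU dataset card, image handling is omitted, and the serialization of `chat_completion_input` is an assumption; the Gist above is the authoritative version.
+
+```python
+import ast
+import json
+
+
+def to_eval_row(raw_row: dict) -> dict:
+    # `options` is stored as a stringified Python list in the raw MMMU dataset.
+    options = ast.literal_eval(raw_row["options"])
+    choices = "\n".join(f"{letter}. {text}" for letter, text in zip("ABCDEFGH", options))
+    question = f"{raw_row['question']}\n\n{choices}"
+    user_message = {
+        "role": "user",
+        # The real preprocessing also attaches the row's images to the message
+        # content so a vision model can see them; that part is omitted here.
+        "content": question,
+    }
+    return {
+        "input_query": question,
+        "expected_answer": raw_row["answer"],
+        # Serialized as JSON here; match whatever encoding the llamastack/mmmu dataset uses.
+        "chat_completion_input": json.dumps([user_message]),
+    }
+```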
+
+```python
+import datasets
+
+subset = "Agriculture"
+split = "dev"
+
+ds = datasets.load_dataset(path="llamastack/mmmu", name=subset, split=split)
+ds = ds.select_columns(["chat_completion_input", "input_query", "expected_answer"])
+eval_rows = ds.to_pandas().to_dict(orient="records")
+```
+
+- Next, to run an evaluation on a model candidate, we will need to:
+  - Define a system prompt
+  - Define an EvalCandidate
+  - Run the evaluation on the dataset
+
+```python
+SYSTEM_PROMPT_TEMPLATE = """
+You are an expert in Agriculture whose job is to answer questions from the user using images.
+First, reason about the correct answer.
+Then write the answer in the following format where X is exactly one of A,B,C,D:
+Answer: X
+Make sure X is one of A,B,C,D.
+If you are uncertain of the correct answer, guess the most likely one.
+"""
+
+system_message = {
+    "role": "system",
+    "content": SYSTEM_PROMPT_TEMPLATE,
+}
+
+# `client` is an already-initialized Llama Stack client connected to your distribution.
+client.eval_tasks.register(
+    eval_task_id="meta-reference::mmmu",
+    dataset_id=f"mmmu-{subset}-{split}",
+    scoring_functions=["basic::regex_parser_multiple_choice_answer"]
+)
+
+response = client.eval.evaluate_rows(
+    task_id="meta-reference::mmmu",
+    input_rows=eval_rows,
+    scoring_functions=["basic::regex_parser_multiple_choice_answer"],
+    task_config={
+        "type": "benchmark",
+        "eval_candidate": {
+            "type": "model",
+            "model": "meta-llama/Llama-3.2-90B-Vision-Instruct",
+            "sampling_params": {
+                "temperature": 0.0,
+                "max_tokens": 4096,
+                "top_p": 0.9,
+                "repeat_penalty": 1.0,
+            },
+            "system_message": system_message
+        }
+    }
+)
+```
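+
+`evaluate_rows` returns the model generations together with per-row results from the registered scoring functions. The exact response schema can differ between client versions, so inspect the object first; assuming it exposes a `scores` mapping with per-row score entries (an assumption to verify against your installed client), a rough accuracy tally might look like this:
+
+```python
+from pprint import pprint
+
+# Inspect the generations and scoring results first.
+pprint(response)
+
+# Hypothetical aggregation -- adjust the attribute names to your client version.
+mmmu_result = response.scores["basic::regex_parser_multiple_choice_answer"]
+rows = mmmu_result.score_rows
+accuracy = sum(row["score"] for row in rows) / len(rows)
+print(f"MMMU {subset} ({split}) accuracy: {accuracy:.1%}")
+```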
+
+#### 1.2 Running SimpleQA
+- We will use a pre-processed SimpleQA dataset from [llamastack/evals](https://huggingface.co/datasets/llamastack/evals/viewer/evals__simpleqa), which was obtained by transforming the input queries into the format accepted by the `inference/chat-completion` API.
+- Since we will be using this same dataset in our next example for Agentic evaluation, we will register it using the `/datasets` API and interact with it through the `/datasetio` API.
+
+```python
+simpleqa_dataset_id = "huggingface::simpleqa"
+
+_ = client.datasets.register(
+    dataset_id=simpleqa_dataset_id,
+    provider_id="huggingface",
+    url={"uri": "https://huggingface.co/datasets/llamastack/evals"},
+    metadata={
+        "path": "llamastack/evals",
+        "name": "evals__simpleqa",
+        "split": "train",
+    },
+    dataset_schema={
+        "input_query": {"type": "string"},
+        "expected_answer": {"type": "string"},
+        "chat_completion_input": {"type": "chat_completion_input"},
+    }
+)
+
+eval_rows = client.datasetio.get_rows_paginated(
+    dataset_id=simpleqa_dataset_id,
+    rows_in_page=5,
+)
+```
+
+```python
+client.eval_tasks.register(
+    eval_task_id="meta-reference::simpleqa",
+    dataset_id=simpleqa_dataset_id,
+    scoring_functions=["llm-as-judge::405b-simpleqa"]
+)
+
+response = client.eval.evaluate_rows(
+    task_id="meta-reference::simpleqa",
+    input_rows=eval_rows.rows,
+    scoring_functions=["llm-as-judge::405b-simpleqa"],
+    task_config={
+        "type": "benchmark",
+        "eval_candidate": {
+            "type": "model",
+            "model": "meta-llama/Llama-3.2-90B-Vision-Instruct",
+            "sampling_params": {
+                "temperature": 0.0,
+                "max_tokens": 4096,
+                "top_p": 0.9,
+                "repeat_penalty": 1.0,
+            },
+        }
+    }
+)
+```
+
+
+### 2. Agentic Evaluation
+- In this example, we will demonstrate how to evaluate an agent candidate served by Llama Stack via the `/agent` API.
+- We will continue to use the SimpleQA dataset we used in the previous example.
+- Instead of running the evaluation on a model, we will run it on a Search Agent that has access to a search tool. We will define our agent evaluation candidate through `AgentConfig`.
+
+```python
+agent_config = {
+    "model": "meta-llama/Llama-3.1-405B-Instruct",
+    "instructions": "You are a helpful assistant",
+    "sampling_params": {
+        "strategy": "greedy",
+        "temperature": 0.0,
+        "top_p": 0.95,
+    },
+    "tools": [
+        {
+            "type": "brave_search",
+            "engine": "tavily",
+            # `userdata` comes from `google.colab`; outside Colab, read the key from an
+            # environment variable or your own secrets store instead.
+            "api_key": userdata.get("TAVILY_SEARCH_API_KEY")
+        }
+    ],
+    "tool_choice": "auto",
+    "tool_prompt_format": "json",
+    "input_shields": [],
+    "output_shields": [],
+    "enable_session_persistence": False
+}
+
+response = client.eval.evaluate_rows(
+    task_id="meta-reference::simpleqa",
+    input_rows=eval_rows.rows,
+    scoring_functions=["llm-as-judge::405b-simpleqa"],
+    task_config={
+        "type": "benchmark",
+        "eval_candidate": {
+            "type": "agent",
+            "config": agent_config,
+        }
+    }
+)
+```
diff --git a/docs/source/building_applications/index.md b/docs/source/building_applications/index.md
index 6e2062204..0b3a9a406 100644
--- a/docs/source/building_applications/index.md
+++ b/docs/source/building_applications/index.md
@@ -1,6 +1,8 @@
 # Building AI Applications
 
-Llama Stack provides all the building blocks needed to create sophisticated AI applications. This guide will walk you through how to use these components effectively.
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1F2ksmkoGQPa4pzRjMOE6BXWeOxWFIW6n?usp=sharing)
+
+Llama Stack provides all the building blocks needed to create sophisticated AI applications. This guide will walk you through how to use these components effectively. Check out our Colab notebook to follow along with working examples of how you can build LLM-powered agentic applications using Llama Stack.
 
 ## Basic Inference
diff --git a/docs/source/concepts/evaluation_concepts.md b/docs/source/concepts/evaluation_concepts.md
new file mode 100644
index 000000000..399d99d92
--- /dev/null
+++ b/docs/source/concepts/evaluation_concepts.md
@@ -0,0 +1,40 @@
+# Evaluation Concepts
+
+The Llama Stack Evaluation flow allows you to run evaluations on your GenAI application datasets or pre-registered benchmarks.
+
+We introduce a set of APIs in Llama Stack to support running evaluations of LLM applications:
+- `/datasetio` + `/datasets` API
+- `/scoring` + `/scoring_functions` API
+- `/eval` + `/eval_tasks` API
+
+This guide goes over these sets of APIs and the developer experience flow of using Llama Stack to run evaluations for different use cases. Check out our Colab notebook with working evaluation examples [here](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing).
+
+
+## Evaluation Concepts
+
+The Evaluation APIs are associated with a set of Resources as shown in the following diagram. Please visit the Resources section in our [Core Concepts](../concepts/index.md) guide for a better high-level understanding.
+
+![Eval Concepts](../references/evals_reference/resources/eval-concept.png)
+
+- **DatasetIO**: defines the interface to datasets and data loaders.
+  - Associated with the `Dataset` resource.
+- **Scoring**: evaluates the outputs of the system.
+  - Associated with the `ScoringFunction` resource. We provide a suite of out-of-the-box scoring functions as well as the ability to add custom evaluators. These scoring functions are the core part of defining an evaluation task and producing evaluation metrics.
+- **Eval**: generates outputs (via Inference or Agents) and performs scoring.
+  - Associated with the `EvalTask` resource.
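+
+Each of these resources can be registered and inspected through the Llama Stack client. As a quick sketch (the method names below assume a recent `llama-stack-client`; check your installed version):
+
+```python
+from llama_stack_client import LlamaStackClient
+
+# Point this at wherever your Llama Stack distribution is running.
+client = LlamaStackClient(base_url="http://localhost:5001")
+
+print(client.datasets.list())           # Dataset resources backing /datasetio
+print(client.scoring_functions.list())  # ScoringFunction resources backing /scoring
+print(client.eval_tasks.list())         # EvalTask resources backing /eval
+```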
+
+
+Use the following decision tree to decide how to use the Llama Stack Evaluation flow.
+![Eval Flow](../references/evals_reference/resources/eval-flow.png)
+
+
+```{admonition} Note on Benchmark vs. Application Evaluation
+:class: tip
+- **Benchmark Evaluation** is a well-defined eval task consisting of a `dataset` and a `scoring_function`. The generation (inference or agent) is done as part of the evaluation.
+- **Application Evaluation** assumes users already have application inputs and generated outputs. Evaluation purely focuses on scoring the generated outputs via scoring functions (e.g. LLM-as-judge).
+```
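+
+For example, an application-style evaluation over rows that already contain generated outputs could look roughly like the following sketch. The column names are illustrative, and the exact shape of the `scoring_functions` argument (and the availability of the `basic::subset_of` scoring function) should be checked against your client and distribution.
+
+```python
+# Rows with inputs and already-generated outputs; no inference happens during scoring.
+rows = [
+    {
+        "input_query": "What is the capital of France?",
+        "generated_answer": "Paris is the capital of France.",
+        "expected_answer": "Paris",
+    },
+]
+
+scoring_response = client.scoring.score(
+    input_rows=rows,
+    # Assumed shape: a mapping of scoring function id -> optional parameters.
+    scoring_functions={"basic::subset_of": None},
+)
+print(scoring_response)
+```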
+
+## What's Next?
+
+- Check out our Colab notebook for working examples with evaluations [here](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing).
+- Check out our [Evaluation Reference](../references/evals_reference/index.md) for more details on the APIs.
diff --git a/docs/source/concepts/index.md b/docs/source/concepts/index.md
index d7c88cbf9..32caa66a5 100644
--- a/docs/source/concepts/index.md
+++ b/docs/source/concepts/index.md
@@ -62,3 +62,13 @@ While there is a lot of flexibility to mix-and-match providers, often users will
 
 **On-device Distro**: Finally, you may want to run Llama Stack directly on an edge device (mobile phone or a tablet.) We provide Distros for iOS and Android (coming soon.)
+
+## More Concepts
+- [Evaluation Concepts](evaluation_concepts.md)
+
+```{toctree}
+:maxdepth: 1
+:hidden:
+
+evaluation_concepts
+```
diff --git a/docs/source/cookbooks/index.md b/docs/source/cookbooks/index.md
deleted file mode 100644
index 5c29decf3..000000000
--- a/docs/source/cookbooks/index.md
+++ /dev/null
@@ -1,15 +0,0 @@
-# Llama Stack Cookbooks
-
-In these sets of cookbooks, we will walk you through the main sets of APIs we offer with Llama Stack with working examples to explore the possibilities that Llama Stack opens up for you.
-
-
-- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1F2ksmkoGQPa4pzRjMOE6BXWeOxWFIW6n?usp=sharing)[**Llama Stack Building AI Applications**](../building_applications/index): How you can build LLM-powered agentic applications using Llama Stack.
-
-
-- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing)[**Llama Stack Evaluations Flow**](evals): How you can use Llama Stack for running evaluations on your LLM-powered applications.
-
-```{toctree}
-:maxdepth: 2
-:hidden:
-evals
-```
diff --git a/docs/source/index.md b/docs/source/index.md
index 48830d94c..cf7c0b236 100644
--- a/docs/source/index.md
+++ b/docs/source/index.md
@@ -59,8 +59,8 @@ getting_started/index
 concepts/index
 distributions/index
 building_applications/index
+benchmark_evaluations/index
 playground/index
-cookbooks/index
 contributing/index
 references/index
 ```
diff --git a/docs/source/cookbooks/evals.md b/docs/source/references/evals_reference/index.md
similarity index 100%
rename from docs/source/cookbooks/evals.md
rename to docs/source/references/evals_reference/index.md
diff --git a/docs/source/cookbooks/resources/eval-concept.png b/docs/source/references/evals_reference/resources/eval-concept.png
similarity index 100%
rename from docs/source/cookbooks/resources/eval-concept.png
rename to docs/source/references/evals_reference/resources/eval-concept.png
diff --git a/docs/source/cookbooks/resources/eval-flow.png b/docs/source/references/evals_reference/resources/eval-flow.png
similarity index 100%
rename from docs/source/cookbooks/resources/eval-flow.png
rename to docs/source/references/evals_reference/resources/eval-flow.png
diff --git a/docs/source/references/index.md b/docs/source/references/index.md
index d85bb7820..51e3dd0ba 100644
--- a/docs/source/references/index.md
+++ b/docs/source/references/index.md
@@ -14,4 +14,5 @@ python_sdk_reference/index
 llama_cli_reference/index
 llama_stack_client_cli_reference
 llama_cli_reference/download_models
+evals_reference/index
 ```