diff --git a/docs/_static/llama-stack-spec.html b/docs/_static/llama-stack-spec.html
index a5ea562bf..09d4cb805 100644
--- a/docs/_static/llama-stack-spec.html
+++ b/docs/_static/llama-stack-spec.html
@@ -8548,7 +8548,7 @@
},
"additionalProperties": false,
"title": "EvaluationTask",
- "description": "A task for evaluation. To specify a task, one of the following must be provided: - `benchmark_id`: Run evaluation task against a benchmark_id - `dataset_id` and `grader_ids`: Run evaluation task against a dataset_id and a list of grader_ids - `data_source` and `grader_ids`: Run evaluation task against a data source (e.g. rows, uri, etc.) and a list of grader_ids"
+ "description": "A task for evaluation. To specify a task, one of the following must be provided: - `benchmark_id`: Run the evaluation task against a benchmark_id. Use this when you have a curated dataset and have settled on your graders. - `dataset_id` and `grader_ids`: Run the evaluation task against a dataset_id and a list of grader_ids. Use this when you have datasets and/or are iterating on your graders. - `data_source` and `grader_ids`: Run the evaluation task against a data source (e.g. rows, uri, etc.) and a list of grader_ids. Prefer this early in your evaluation cycle, when you are still experimenting with your data and graders."
},
"GradeRequest": {
"type": "object",
diff --git a/docs/_static/llama-stack-spec.yaml b/docs/_static/llama-stack-spec.yaml
index 5676c91b6..72361c50e 100644
--- a/docs/_static/llama-stack-spec.yaml
+++ b/docs/_static/llama-stack-spec.yaml
@@ -5924,10 +5924,14 @@ components:
title: EvaluationTask
description: >-
A task for evaluation. To specify a task, one of the following must be provided:
- - `benchmark_id`: Run evaluation task against a benchmark_id - `dataset_id`
- and `grader_ids`: Run evaluation task against a dataset_id and a list of grader_ids
- - `data_source` and `grader_ids`: Run evaluation task against a data source
- (e.g. rows, uri, etc.) and a list of grader_ids
+ - `benchmark_id`: Run the evaluation task against a benchmark_id. Use this
+ when you have a curated dataset and have settled on your graders. - `dataset_id`
+ and `grader_ids`: Run the evaluation task against a dataset_id and a list
+ of grader_ids. Use this when you have datasets and/or are iterating on your
+ graders. - `data_source` and `grader_ids`: Run the evaluation task against
+ a data source (e.g. rows, uri, etc.) and a list of grader_ids. Prefer this
+ early in your evaluation cycle, when you are still experimenting with your
+ data and graders.
GradeRequest:
type: object
properties:
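
The three field combinations described above are mutually exclusive ways to specify an `EvaluationTask`. A minimal sketch of what each request body might look like, assuming plain-dict payloads; the `validate_evaluation_task` helper and the example identifiers (`mmlu`, `exact-match`) are illustrative only and not part of the llama-stack API:

```python
# Sketch: the three accepted ways to specify an EvaluationTask,
# per the description in the spec. Field names come from the spec;
# the validation helper below is hypothetical.

def validate_evaluation_task(task: dict) -> bool:
    """Return True if the task uses exactly one accepted field combination."""
    combos = [
        {"benchmark_id"},               # curated dataset, settled graders
        {"dataset_id", "grader_ids"},   # iterating on graders
        {"data_source", "grader_ids"},  # early experimentation
    ]
    return sum(set(task) == c for c in combos) == 1

# One example payload per accepted combination (identifiers are made up):
by_benchmark = {"benchmark_id": "mmlu"}
by_dataset = {"dataset_id": "my-eval-set", "grader_ids": ["exact-match"]}
by_data_source = {
    "data_source": {"rows": [{"input": "2+2", "expected": "4"}]},
    "grader_ids": ["exact-match"],
}

assert all(map(validate_evaluation_task, [by_benchmark, by_dataset, by_data_source]))
```

Mixing combinations (e.g. supplying both `benchmark_id` and `dataset_id`) would not match any of the documented options, which the helper rejects.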