Update llama_stack/apis/evaluation/evaluation.py
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
parent 820b9a00c7
commit 913e6eb50f
1 changed file with 1 addition and 1 deletion
llama_stack/apis/evaluation/evaluation.py
@@ -52,7 +52,7 @@ class EvaluationTask(BaseModel):
     """
     A task for evaluation. To specify a task, one of the following must be provided:
     - `benchmark_id`: Run evaluation task against a benchmark_id. Use this when you have a curated dataset and have settled on the graders.
-    - `dataset_id` and `grader_ids`: Run evaluation task against a dataset_id and a list of grader_ids
+    - `dataset_id` and `grader_ids`: Run evaluation task against a dataset_id and a list of grader_ids. Use this when you have datasets and / or are iterating on your graders.
     - `data_source` and `grader_ids`: Run evaluation task against a data source (e.g. rows, uri, etc.) and a list of grader_ids. Prefer this when you are early in your evaluation cycle and experimenting much more with your data and graders.

     :param benchmark_id: The benchmark ID to evaluate.
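For context, a minimal usage sketch of the three task specifications described in the updated docstring. This assumes EvaluationTask accepts these fields as optional keyword arguments; the benchmark ID, grader IDs, and the shape of data_source below are illustrative placeholders, not values taken from the codebase.

# Sketch of the three mutually exclusive ways to specify an EvaluationTask,
# per the docstring above. Concrete IDs and the data_source shape are
# hypothetical examples.
from llama_stack.apis.evaluation.evaluation import EvaluationTask

# 1. A curated benchmark, once the graders are settled:
task = EvaluationTask(benchmark_id="my-benchmark")

# 2. A registered dataset, while iterating on graders:
task = EvaluationTask(
    dataset_id="my-eval-dataset",
    grader_ids=["exact-match", "llm-as-judge"],
)

# 3. An ad-hoc data source (e.g. inline rows), early in the evaluation cycle:
task = EvaluationTask(
    data_source={"rows": [{"input": "2 + 2", "expected": "4"}]},
    grader_ids=["exact-match"],
)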