From 711cfa00fc5aa26b15165e37a06329a791af93fe Mon Sep 17 00:00:00 2001
From: Mark Campbell
Date: Fri, 21 Mar 2025 19:00:53 +0000
Subject: [PATCH] docs: fix typos in evaluation concepts (#1745)

# What does this PR do?
[Provide a short summary of what this PR does and why. Link to relevant issues if applicable.]

Typo fix for `output_dir` flag and misspelling of aggregate

[//]: # (If resolving an issue, uncomment and update the line below)
[//]: # (Closes #[issue-number])

## Test Plan
[Describe the tests you ran to verify your changes with result summaries. *Provide clear instructions so the plan can be easily re-executed.*]

N/A

[//]: # (## Documentation)
---
 docs/source/concepts/evaluation_concepts.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/concepts/evaluation_concepts.md b/docs/source/concepts/evaluation_concepts.md
index abe5898b6..14390c0a2 100644
--- a/docs/source/concepts/evaluation_concepts.md
+++ b/docs/source/concepts/evaluation_concepts.md
@@ -55,7 +55,7 @@ llama stack run llama_stack/templates/open-benchmark/run.yaml
 There are 3 necessary inputs to run a benchmark eval
 - `list of benchmark_ids`: The list of benchmark ids to run evaluation on
 - `model-id`: The model id to evaluate on
-- `utput_dir`: Path to store the evaluate results
+- `output_dir`: Path to store the evaluate results
 ```
 llama-stack-client eval run-benchmark ... \
 --model_id \
@@ -69,7 +69,7 @@ llama-stack-client eval run-benchmark
 help to see the description of all the flags that eval run-benchmark has
 
-In the output log, you can find the file path that has your evaluation results. Open that file and you can see you aggrgate
+In the output log, you can find the file path that has your evaluation results. Open that file and you can see you aggregate
 evaluation results over there.