docs: fix typos in evaluation concepts (#1745)

# What does this PR do?
Fixes two typos in the evaluation concepts docs: the `output_dir` flag was written as `utput_dir`, and "aggregate" was misspelled as "aggrgate".
[//]: # (If resolving an issue, uncomment and update the line below)
[//]: # (Closes #[issue-number])

## Test Plan
N/A (documentation-only change).
[//]: # (## Documentation)
Commit 711cfa00fc (parent 4c14bb7510) by Mark Campbell, 2025-03-21 19:00:53 +00:00, committed by GitHub.
@@ -55,7 +55,7 @@ llama stack run llama_stack/templates/open-benchmark/run.yaml
 There are 3 necessary inputs to run a benchmark eval
 - `list of benchmark_ids`: The list of benchmark ids to run evaluation on
 - `model-id`: The model id to evaluate on
-- `utput_dir`: Path to store the evaluate results
+- `output_dir`: Path to store the evaluate results
 ```
 llama-stack-client eval run-benchmark <benchmark_id_1> <benchmark_id_2> ... \
 --model_id <model id to evaluate on> \
@@ -69,7 +69,7 @@ llama-stack-client eval run-benchmark help
 to see the description of all the flags that eval run-benchmark has
-In the output log, you can find the file path that has your evaluation results. Open that file and you can see you aggrgate
+In the output log, you can find the file path that has your evaluation results. Open that file and you can see you aggregate
 evaluation results over there.
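
For reference, the corrected `output_dir` flag would be used along the lines of the command shown in the docs above. This is only a sketch: the benchmark id, model id, and output path here are hypothetical placeholders, and the exact flag spelling is taken from the docs being fixed, not verified against the CLI.

```
llama-stack-client eval run-benchmark <benchmark_id> \
  --model_id <model id to evaluate on> \
  --output_dir /tmp/eval_results
```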