# What does this PR do?

- Update `/eval-tasks` to `/benchmarks`
- ⚠️ Remove the differentiation between `app` vs. `benchmark` eval task configs. We now only have `BenchmarkConfig`. The overloaded `benchmark` is confusing and does not add any value. Backward compatibility is kept, as the "type" field is not used anywhere.

[//]: # (If resolving an issue, uncomment and update the line below)
[//]: # (Closes #[issue-number])

## Test Plan

- This change is backward compatible
- Run the notebook tests with

```
pytest -v -s --nbval-lax ./docs/getting_started.ipynb
pytest -v -s --nbval-lax ./docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb
```

<img width="846" alt="image" src="https://github.com/user-attachments/assets/d2fc06a7-593a-444f-bc1f-10ab9b0c843d" />

[//]: # (## Documentation)
[//]: # (- [ ] Added a Changelog entry if the change is significant)

---------

Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
Signed-off-by: Ben Browning <bbrownin@redhat.com>
Signed-off-by: Sébastien Han <seb@redhat.com>
Signed-off-by: reidliu <reid201711@gmail.com>
Co-authored-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
Co-authored-by: Ben Browning <ben324@gmail.com>
Co-authored-by: Sébastien Han <seb@redhat.com>
Co-authored-by: Reid <61492567+reidliu41@users.noreply.github.com>
Co-authored-by: reidliu <reid201711@gmail.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
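As a quick illustration of what the rename means on the Python client side, here is a minimal before/after sketch. The "after" call matches the documentation example further below; the "before" names (`client.eval_tasks.register`, `eval_task_id`) are recalled from the old `/eval-tasks` surface and should be treated as illustrative only.

```python
# Before this PR (illustrative, old /eval-tasks surface):
# response = client.eval_tasks.register(
#     eval_task_id="my_eval",
#     dataset_id="my_dataset",
#     scoring_functions=["accuracy", "relevance"],
# )

# After this PR (new /benchmarks surface):
response = client.benchmarks.register(
    benchmark_id="my_eval",
    dataset_id="my_dataset",
    scoring_functions=["accuracy", "relevance"],
)
```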
## Testing & Evaluation

Llama Stack provides built-in tools for evaluating your applications:

1. **Benchmarking**: Test against standard datasets
2. **Application Evaluation**: Score your application's outputs
3. **Custom Metrics**: Define your own evaluation criteria

Here's how to set up basic evaluation:

```python
# Register a benchmark (formerly an "eval task") backed by a dataset
# and a set of scoring functions
response = client.benchmarks.register(
    benchmark_id="my_eval",
    dataset_id="my_dataset",
    scoring_functions=["accuracy", "relevance"],
)

# Run the evaluation (returns a job handle)
job = client.eval.run_eval(
    benchmark_id="my_eval",
    task_config={
        # The app/benchmark split was removed; "type" is accepted only for
        # backward compatibility and is not used anywhere.
        "type": "app",
        "eval_candidate": {"type": "agent", "config": agent_config},
    },
)

# Get results
result = client.eval.job_result(benchmark_id="my_eval", job_id=job.job_id)
```
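Because the run is job-based, in practice you may want to wait for the job to finish before pulling results. A minimal polling sketch follows; the `client.eval.job_status` call and the status values it returns are assumptions about the client of this era, so verify them against your installed `llama-stack-client`:

```python
import time

# Assumes job_status returns a simple status string such as "in_progress",
# "completed", or "failed"; check your client version for the exact shape.
status = client.eval.job_status(benchmark_id="my_eval", job_id=job.job_id)
while status not in ("completed", "failed"):
    time.sleep(5)
    status = client.eval.job_status(benchmark_id="my_eval", job_id=job.job_id)

result = client.eval.job_result(benchmark_id="my_eval", job_id=job.job_id)
```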
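For items 2 and 3 in the list above (scoring your application's outputs without registering a full benchmark), rows can be scored directly. This is a hedged sketch: the `client.scoring.score` signature (rows plus a mapping of scoring-function ids to optional params) and the built-in `basic::equality` function id are assumptions, so double-check them against your installation:

```python
# Score a handful of application outputs directly (no benchmark registration).
rows = [
    {
        "input_query": "What is the capital of France?",
        "generated_answer": "Paris",
        "expected_answer": "Paris",
    },
]

# scoring_functions maps a scoring-function id to optional params (None = defaults);
# both the method signature and "basic::equality" are assumptions to verify.
scoring_response = client.scoring.score(
    input_rows=rows,
    scoring_functions={"basic::equality": None},
)
print(scoring_response.results)
```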