# What does this PR do?

- This was missed in the previous deprecation: https://github.com/meta-llama/llama-stack/pull/1186
- Part of https://github.com/meta-llama/llama-stack/issues/1396

## Test Plan

```
pytest -v -s --nbval-lax ./llama-stack/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb
```
## Testing & Evaluation
Llama Stack provides built-in tools for evaluating your applications:
1. **Benchmarking**: Test against standard datasets
2. **Application Evaluation**: Score your application's outputs
3. **Custom Metrics**: Define your own evaluation criteria
Here's how to set up basic evaluation:
```python
from llama_stack_client import LlamaStackClient

# `client` talks to a running Llama Stack server; adjust base_url for your deployment
client = LlamaStackClient(base_url="http://localhost:8321")

# Create an evaluation task
response = client.benchmarks.register(
    benchmark_id="my_eval",
    dataset_id="my_dataset",
    scoring_functions=["accuracy", "relevance"],
)

# Run evaluation (`agent_config` is the agent configuration defined when you set up your agent)
job = client.eval.run_eval(
    benchmark_id="my_eval",
    benchmark_config={
        "type": "app",
        "eval_candidate": {"type": "agent", "config": agent_config},
    },
)

# Get results
result = client.eval.job_result(benchmark_id="my_eval", job_id=job.job_id)
```
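
For the second item above, scoring your application's outputs, you don't need to register a benchmark at all: the scoring API can score rows you have already generated. The sketch below is a minimal illustration that reuses the `client` from the snippet above; the row fields and the `"accuracy"`/`"relevance"` scoring-function IDs are placeholders for whatever scoring functions your distribution actually registers.

```python
# Score pre-generated application outputs directly with the scoring API.
# Row keys and scoring-function IDs here are illustrative placeholders; check
# what is available on your distribution with client.scoring_functions.list().
rows = [
    {
        "input_query": "What is the capital of France?",
        "generated_answer": "Paris",
        "expected_answer": "Paris",
    },
]

scoring_response = client.scoring.score(
    input_rows=rows,
    scoring_functions={"accuracy": None, "relevance": None},
)

# Each scoring function returns per-row scores plus aggregated results
for fn_id, fn_result in scoring_response.results.items():
    print(fn_id, fn_result.aggregated_results)
```

Register a benchmark when you want repeatable runs over a dataset; call the scoring API directly for ad-hoc checks of your application's outputs.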