Testing & Evaluation
Llama Stack provides built-in tools for evaluating your applications:
- Benchmarking: Test against standard datasets
- Application Evaluation: Score your application's outputs
- Custom Metrics: Define your own evaluation criteria (see the scoring-function sketch after this list)
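
Custom metrics map to Llama Stack's scoring-function registry. The sketch below assumes a `client.scoring_functions.register(...)` call backed by an LLM-as-judge provider; the parameter names, provider id, and params shape are assumptions and may differ between Llama Stack versions, so treat it as illustrative rather than canonical.

```python
# Hypothetical sketch: register a custom LLM-as-judge scoring function so it
# can be referenced by name in `scoring_functions` when creating an eval task.
# Parameter names and the provider id below are assumptions, not guaranteed API.
client.scoring_functions.register(
    scoring_fn_id="my_custom_relevance",
    description="Judge whether the answer addresses the user's question",
    return_type={"type": "string"},
    provider_id="llm-as-judge",  # assumed judge provider id
    params={
        "type": "llm_as_judge",
        "judge_model": "meta-llama/Llama-3.1-8B-Instruct",
        "prompt_template": "Rate how relevant the answer is to the question ...",
    },
)
```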
Here's how to register an evaluation task, run it against your agent, and fetch the results:
```python
# Create an evaluation task
response = client.eval_tasks.register(
    eval_task_id="my_eval",
    dataset_id="my_dataset",
    scoring_functions=["accuracy", "relevance"],
)

# Run evaluation
job = client.eval.run_eval(
    task_id="my_eval",
    task_config={
        "type": "app",
        "eval_candidate": {
            "type": "agent",
            "config": agent_config,
        },
    },
)

# Get results
result = client.eval.job_result(
    task_id="my_eval",
    job_id=job.job_id,
)
```