llama-stack-mirror/llama_stack/providers/inline/eval/meta_reference
Xi Yan 87ec4243ee
[rag evals][3/n] add ability to eval retrieval + generation in agentic eval pipeline (#668)
# What does this PR do?

- This PR adds the ability for users to evaluate retrieval and generation both separately and as a whole, by passing an AgentConfig to the /eval API (see the sketch after this list).
- The context retrieved by the agent's memory_retrieval step is stored in the "context" column, so scoring functions that evaluate the retrieved context can consume it.
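
For illustration, here is a minimal sketch of evaluating a RAG agent end-to-end through the eval API with an agent candidate. It assumes the llama-stack-client Python SDK; the dataset rows, task id, scoring function ids, and the agent's tool configuration are placeholders, and exact method/field names may differ from the shipped API.

```python
# Sketch only: assumes the llama-stack-client Python SDK.
# Task id, scoring function ids, column names, and the agent's tool
# configuration below are illustrative placeholders, not confirmed values.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")

# The same AgentConfig used to run the RAG agent (including its
# memory/retrieval tool) is passed as the eval candidate.
agent_config = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "instructions": "Answer using only the retrieved context.",
    "tools": [{"type": "memory"}],  # placeholder retrieval-tool config
    "enable_session_persistence": False,
}

# Example eval rows; column names follow the usual input_query /
# expected_answer convention but are placeholders here.
rows = [
    {
        "input_query": "What is the capital of France?",
        "expected_answer": "Paris",
    },
]

response = client.eval.evaluate_rows(
    task_id="rag-eval-task",  # placeholder eval task id
    input_rows=rows,
    scoring_functions=[
        "braintrust::answer-correctness",  # grades the generated answer
        "braintrust::context-relevancy",   # grades retrieval via "context"
    ],
    task_config={
        "type": "app",
        "eval_candidate": {"type": "agent", "config": agent_config},
    },
)

# Each result row carries the agent's retrieved context in the "context"
# column, which is what the retrieval-scoring function reads.
print(response.generations, response.scores)
```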

## Test Plan
- E2E Test RAG Agent Notebook:
https://gist.github.com/yanxi0830/0377594d29958f9b6f9317ab049fa836

<img width="758" alt="image"
src="https://github.com/user-attachments/assets/58ed9db7-f07b-400a-931b-923b0d612902"
/>

<img width="682" alt="image"
src="https://github.com/user-attachments/assets/9ebd7fbd-2a6d-4c93-92fa-a9456fae2378"
/>



## Sources

Please link relevant resources if necessary.


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2025-01-02 11:18:43 -08:00
| File | Last commit | Date |
|------|-------------|------|
| __init__.py | [Agentic Eval] add ability to run agents generation (#469) | 2024-11-18 11:43:03 -08:00 |
| config.py | Add ability to query and export spans to dataset (#574) | 2024-12-05 21:07:30 -08:00 |
| eval.py | [rag evals][3/n] add ability to eval retrieval + generation in agentic eval pipeline (#668) | 2025-01-02 11:18:43 -08:00 |