llama-stack/llama_stack/apis
Xi Yan 3a269c4635
[rag evals] refactor & add ability to eval retrieval + generation in agentic eval pipeline (#664)
# What does this PR do?

- See https://github.com/meta-llama/llama-stack/pull/666 &
https://github.com/meta-llama/llama-stack/pull/668

- Refactor `BaseScoringFn` into a minimal interface, and add a new
`RegistrableBaseScoring`
- Refactor the data schema check
  - To evaluate the retrieval component of RAG separately, some scoring
functions will additionally need a "context" column
- Refactor the braintrust eval (more scoring fns to be added & tested in a
follow-up PR)
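
The split described above can be sketched roughly as follows. This is an illustrative outline only, not the actual llama-stack code: `RegistrableBaseScoring` is named in this PR, but the method signatures, `validate_row_schema` helper, and `ContextRecallScoringFn` example are hypothetical.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class BaseScoringFn(ABC):
    """Minimal interface sketch: subclasses only need to score a row."""

    @abstractmethod
    def score_row(self, input_row: Dict[str, Any]) -> Dict[str, Any]: ...


def validate_row_schema(row: Dict[str, Any], required_columns: List[str]) -> None:
    """Hypothetical data schema check: fail fast if required columns are missing."""
    missing = [col for col in required_columns if col not in row]
    if missing:
        raise ValueError(f"row is missing required columns: {missing}")


class RegistrableBaseScoring(BaseScoringFn, ABC):
    """Layers a registry of scoring-fn definitions on top of the minimal interface."""

    def __init__(self) -> None:
        self.supported_fn_defs: Dict[str, Any] = {}

    def register_scoring_fn_def(self, identifier: str, fn_def: Any) -> None:
        self.supported_fn_defs[identifier] = fn_def


class ContextRecallScoringFn(RegistrableBaseScoring):
    """Toy retrieval-eval scoring fn that requires a "context" column."""

    REQUIRED_COLUMNS = ["input_query", "generated_answer", "context"]

    def score_row(self, input_row: Dict[str, Any]) -> Dict[str, Any]:
        validate_row_schema(input_row, self.REQUIRED_COLUMNS)
        # Naive token-overlap score between the generated answer and the
        # retrieved context (stand-in for a real retrieval metric).
        answer_tokens = set(input_row["generated_answer"].lower().split())
        context_tokens = set(input_row["context"].lower().split())
        overlap = len(answer_tokens & context_tokens) / max(len(answer_tokens), 1)
        return {"score": overlap}
```

Rows lacking the "context" column fail the schema check before scoring, which is why retrieval-oriented scoring functions need the extra column in their expected schema.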

## Test Plan

```
pytest -v -s -m llm_as_judge_scoring_together_inference scoring/test_scoring.py --judge-model meta-llama/Llama-3.2-3B-Instruct
pytest -v -s -m basic_scoring_together_inference scoring/test_scoring.py
pytest -v -s -m braintrust_scoring_together_inference scoring/test_scoring.py
```

<img width="847" alt="image"
src="https://github.com/user-attachments/assets/d099cb2d-6f9c-4bdf-9d0d-f388cf758c0f"
/>

```
pytest -v -s -m meta_reference_eval_together_inference eval/test_eval.py
pytest -v -s -m meta_reference_eval_together_inference_huggingface_datasetio eval/test_eval.py
```
<img width="850" alt="image"
src="https://github.com/user-attachments/assets/dce28fc3-0493-4d34-820a-567260873cc8"
/>



## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section.
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2025-01-02 11:21:33 -08:00
| Name | Last commit | Date |
|---|---|---|
| agents | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| batch_inference | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| common | [bugfix] fix broken vision inference, change serialization for bytes (#693) | 2024-12-30 13:57:41 -08:00 |
| datasetio | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| datasets | Update the "InterleavedTextMedia" type (#635) | 2024-12-17 11:18:31 -08:00 |
| eval | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| eval_tasks | Add version to REST API url (#478) | 2024-11-18 22:44:14 -08:00 |
| inference | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| inspect | [tests] add client-sdk pytests & delete client.py (#638) | 2024-12-16 12:04:56 -08:00 |
| memory | Update the "InterleavedTextMedia" type (#635) | 2024-12-17 11:18:31 -08:00 |
| memory_banks | [tests] add client-sdk pytests & delete client.py (#638) | 2024-12-16 12:04:56 -08:00 |
| models | [tests] add client-sdk pytests & delete client.py (#638) | 2024-12-16 12:04:56 -08:00 |
| post_training | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| safety | Update the "InterleavedTextMedia" type (#635) | 2024-12-17 11:18:31 -08:00 |
| scoring | [rag evals] refactor & add ability to eval retrieval + generation in agentic eval pipeline (#664) | 2025-01-02 11:21:33 -08:00 |
| scoring_functions | [/scoring] add ability to define aggregation functions for scoring functions & refactors (#597) | 2024-12-11 10:03:42 -08:00 |
| shields | [tests] add client-sdk pytests & delete client.py (#638) | 2024-12-16 12:04:56 -08:00 |
| synthetic_data_generation | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| telemetry | Update Telemetry API so OpenAPI generation can work (#640) | 2024-12-16 13:00:14 -08:00 |
| tools | Tools API with brave and MCP providers (#639) | 2024-12-19 21:25:17 -08:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| resource.py | Tools API with brave and MCP providers (#639) | 2024-12-19 21:25:17 -08:00 |
| version.py | Fix the pyopenapi generator avoid potential circular imports | 2024-11-18 23:37:52 -08:00 |