llama-stack-mirror/llama_stack/providers
Xi Yan d3508c4c76
feat(1/n): scoring function registration for llm-as-judge (#1405)
# What does this PR do?

- Add the ability to register an llm-as-judge scoring function with custom judge prompts / params.
- Closes https://github.com/meta-llama/llama-stack/issues/1395

## Test Plan
**Via CLI**
```
llama-stack-client scoring_functions register \
--scoring-fn-id "llm-as-judge::my-prompt" \
--description "my custom judge" \
--return-type '{"type": "string"}' \
--provider-id "llm-as-judge" \
--provider-scoring-fn-id "my-prompt" \
--params '{"type": "llm_as_judge", "judge_model": "meta-llama/Llama-3.2-3B-Instruct", "prompt_template": "always output 1.0"}'
```

<img width="1373" alt="image"
src="https://github.com/user-attachments/assets/7c6fc0ae-64fe-4581-8927-a9d8d746bd72"
/>
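
**Via Python client**

The same registration can also be exercised from Python. The snippet below is a minimal sketch, assuming the `llama_stack_client` SDK's `scoring_functions.register` mirrors the CLI flags above and that a Llama Stack server is listening on `localhost:8321`; adjust the base URL for your setup.

```python
# Sketch: register the custom llm-as-judge scoring function via the Python client.
# Assumes llama_stack_client mirrors the CLI flags shown above and that a
# Llama Stack server is running locally (the base_url is an assumption).
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

client.scoring_functions.register(
    scoring_fn_id="llm-as-judge::my-prompt",
    description="my custom judge",
    return_type={"type": "string"},
    provider_id="llm-as-judge",
    provider_scoring_fn_id="my-prompt",
    params={
        "type": "llm_as_judge",
        "judge_model": "meta-llama/Llama-3.2-3B-Instruct",
        "prompt_template": "always output 1.0",
    },
)

# Confirm the function shows up in the registry.
for fn in client.scoring_functions.list():
    print(fn.identifier)
```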

- Unit tests will be addressed in https://github.com/meta-llama/llama-stack/issues/1396; until then, the registered function can be exercised manually through the scoring API (see the sketch below).
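
A quick manual check, reusing the `client` from the registration sketch above, is to score a row with the newly registered function. This is a sketch under assumptions: the row keys (`input_query`, `generated_answer`, `expected_answer`) are placeholders for whatever fields the judge prompt template actually consumes, and the response is read through the scoring API's `results` / `score_rows` / `aggregated_results` layout.

```python
# Sketch: invoke the registered llm-as-judge function on a single row.
# The row field names below are illustrative placeholders.
response = client.scoring.score(
    input_rows=[
        {
            "input_query": "What is the capital of France?",
            "generated_answer": "Paris",
            "expected_answer": "Paris",
        }
    ],
    # None -> use the params stored at registration time
    scoring_functions={"llm-as-judge::my-prompt": None},
)

result = response.results["llm-as-judge::my-prompt"]
print(result.score_rows)          # per-row judge scores
print(result.aggregated_results)  # aggregated metrics, if any
```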


2025-03-05 10:00:34 -08:00
| Name | Last commit | Last commit date |
|------|-------------|------------------|
| `inline` | feat(1/n): scoring function registration for llm-as-judge (#1405) | 2025-03-05 10:00:34 -08:00 |
| `registry` | fix: groq now depends on litellm | 2025-02-27 14:07:12 -08:00 |
| `remote` | fix(pgvector): replace hyphens with underscores in table names (#1385) | 2025-03-04 07:06:35 -08:00 |
| `tests` | chore: Make README code blocks more easily copy pastable (#1420) | 2025-03-05 09:11:01 -08:00 |
| `utils` | refactor(test): unify vector_io tests and make them configurable (#1398) | 2025-03-04 13:37:45 -08:00 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `datatypes.py` | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |