llama-stack-mirror/llama_stack/providers/inline/eval/meta_reference
Matthew Farrellee 7c888fc0da
feat: update eval runner to use openai endpoints (#3588)
# What does this PR do?

Move the eval=inline::meta-reference implementation to use openai_completion/openai_chat_completion.

Note: this breaks backward compatibility if an eval setup used the sampling params' repetition_penalty or strategy fields (see the sketch below).
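
For illustration, the sketch below shows the kind of sampling-params translation this change implies. `LegacySamplingParams` and `to_openai_chat_kwargs` are hypothetical stand-ins invented for this example, not actual llama-stack types, and exactly which fields survive the translation is an assumption based on the compatibility note above.

```python
# Hypothetical sketch of the sampling-params translation implied by this PR.
# The names below are illustrative, not the real llama-stack types.
from dataclasses import dataclass, field


@dataclass
class LegacySamplingParams:
    """Simplified stand-in for the old sampling params mentioned above."""

    strategy: dict = field(default_factory=lambda: {"type": "greedy"})
    max_tokens: int | None = None
    repetition_penalty: float | None = None


def to_openai_chat_kwargs(params: LegacySamplingParams) -> dict:
    """Map legacy sampling params onto OpenAI-style chat completion kwargs.

    Per the compatibility note above, `strategy` and `repetition_penalty`
    are assumed to have no direct counterpart and are dropped.
    """
    kwargs: dict = {}
    if params.max_tokens is not None:
        kwargs["max_tokens"] = params.max_tokens
    # Dropping strategy and repetition_penalty is exactly what breaks
    # eval setups that relied on them.
    return kwargs


if __name__ == "__main__":
    legacy = LegacySamplingParams(
        strategy={"type": "top_p", "temperature": 0.7, "top_p": 0.9},
        max_tokens=256,
        repetition_penalty=1.1,  # silently lost in the new code path
    )
    print(to_openai_chat_kwargs(legacy))  # -> {'max_tokens': 256}
```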

## Test Plan

CI with new recordings.

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2025-09-29 13:13:53 -07:00
| File | Last commit | Date |
|------|-------------|------|
| __init__.py | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| config.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| eval.py | feat: update eval runner to use openai endpoints (#3588) | 2025-09-29 13:13:53 -07:00 |