* wip
* scoring fn api
* eval api
* eval task
* evaluate api update
* pre commit
* unwrap context -> config
* config field doc
* typo
* naming fix
* separate benchmark / app eval
* api name
* rename
* wip tests
* wip
* datasetio test
* delete unused
* fixture
* scoring resolve
* fix scoring register
* scoring test pass
* score batch
* scoring fix
* fix eval
* test eval works
* huggingface provider
* datasetdef files
* mmlu scoring fn
* test wip
* remove type ignore
* api refactor
* add default task_eval_id for routing
* add eval_id for jobs
* remove type ignore
* huggingface provider
* wip huggingface register
* only keep 1 run_eval
* fix optional
* register task required
* register task required
* delete old tests
* fix
* mmlu loose
* refactor
* msg
* fix tests
* move benchmark task def to file
* msg
* gen openapi
* openapi gen
* move dataset to hf llamastack repo
* remove todo
* refactor
* add register model to unit test
* rename
* register to client
* delete preregistered dataset/eval task
* comments
* huggingface -> remote adapter
* openapi gen
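The commit log above sketches the shape of the new eval and scoring surface: datasets and eval tasks are registered first ("register task required"), evals run as jobs with an eval_id, and scoring can be invoked standalone over a dataset ("score batch"). As a rough illustration of that flow, here is a minimal sketch assuming a llama-stack style Python client; every method name, id, and parameter below (`datasets.register`, `eval_tasks.register`, `eval.run_eval`, `scoring.score_batch`, the scoring function name) is an assumption inferred from the API names in this log, not a verified signature.

```python
# Minimal sketch of the eval/scoring flow described in the commit log.
# NOTE: all client methods, ids, and parameters here are assumptions
# inferred from the API names above; they are NOT verified signatures.
from llama_stack_client import LlamaStackClient  # assumed client package

client = LlamaStackClient(base_url="http://localhost:5000")

# Register the dataset the task evaluates against; the log notes the
# eval datasets moved to the hf llamastack repo.
client.datasets.register(
    dataset_id="mmlu",
    url="https://huggingface.co/datasets/llamastack/evals",
)

# "register task required": an eval task must exist before running an eval.
client.eval_tasks.register(
    eval_task_id="meta-reference-mmlu",     # hypothetical task id
    dataset_id="mmlu",
    scoring_functions=["mmlu_scoring_fn"],  # hypothetical scoring fn name
)

# Kick off a job; the log adds an eval_id / default task_eval_id for routing.
job = client.eval.run_eval(
    task_id="meta-reference-mmlu",
    task_config={"type": "benchmark"},  # benchmark vs. app eval, per the log
)
print(job.job_id)

# Standalone "score batch" over a registered dataset.
result = client.scoring.score_batch(
    dataset_id="mmlu",
    scoring_functions=["mmlu_scoring_fn"],
)
```

The split between registration and execution matches the log's direction: benchmark and app evals share one `run_eval` entry point, with the task definition (dataset plus scoring functions) carried by the registered eval task rather than by each call.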
agents
batch_inference
common
datasetio
datasets
eval
eval_tasks
inference
inspect
memory
memory_banks
models
post_training
safety
scoring
scoring_functions
shields
synthetic_data_generation
telemetry
__init__.py
resource.py