(docs) lm eval harness

ishaan-jaff 2023-11-03 18:11:07 -07:00
parent 49650af444
commit 53371d37b7


@@ -2,7 +2,7 @@
Evaluate LLMs 20x faster with TGI via litellm proxy's `/completions` endpoint.
-This tutorial assumes you're using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)
+This tutorial assumes you're using the `big-refactor` branch of [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/big-refactor)
**Step 1: Start the local proxy**
```shell
@@ -19,8 +19,9 @@ $ export OPENAI_API_BASE="http://0.0.0.0:8000"
**Step 3: Run LM-Eval-Harness**
```shell
-$ python3 main.py \
-  --model gpt3 \
-  --model_args engine=huggingface/bigcode/starcoder \
-  --tasks hellaswag
+python3 -m lm_eval \
+  --model openai-completions \
+  --model_args engine=davinci \
+  --tasks crows_pairs_english_age
```
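Taken together, the updated hunks imply the end-to-end sequence sketched below. This is a sketch under assumptions: the Step 1 proxy command is truncated out of the first hunk, so the `litellm` invocation shown is hypothetical; the port 8000 comes from the `OPENAI_API_BASE` context line; and `lm-eval` must be installed from the `big-refactor` branch for `python3 -m lm_eval` to resolve.

```shell
# Step 1: start the local litellm proxy (hypothetical invocation — the
# actual command is outside the hunks shown in this commit)
# litellm --model huggingface/bigcode/starcoder

# Step 2: point the OpenAI client at the proxy (from the hunk context line)
export OPENAI_API_KEY="anything"              # the proxy does not validate this
export OPENAI_API_BASE="http://0.0.0.0:8000"

# Step 3: run the big-refactor harness against the proxy, guarded so the
# sketch is a no-op when lm-eval is not installed
if python3 -c "import lm_eval" 2>/dev/null; then
  python3 -m lm_eval \
    --model openai-completions \
    --model_args engine=davinci \
    --tasks crows_pairs_english_age
fi
```

The harness's `openai-completions` model reads both environment variables, so no harness-side configuration change is needed beyond pointing `OPENAI_API_BASE` at the proxy.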