mirror of
https://github.com/BerriAI/litellm.git
synced 2025-04-24 18:24:20 +00:00
(docs) lm eval harness
This commit is contained in:
parent
49650af444
commit
53371d37b7
1 changed file with 6 additions and 5 deletions
@@ -2,7 +2,7 @@
 Evaluate LLMs 20x faster with TGI via litellm proxy's `/completions` endpoint.
 
-This tutorial assumes you're using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)
+This tutorial assumes you're using the `big-refactor` branch of [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/big-refactor)
 
 **Step 1: Start the local proxy**
 
 ```shell
@@ -19,8 +19,9 @@ $ export OPENAI_API_BASE="http://0.0.0.0:8000"
 **Step 3: Run LM-Eval-Harness**
 
 ```shell
-$ python3 main.py \
-  --model gpt3 \
-  --model_args engine=huggingface/bigcode/starcoder \
-  --tasks hellaswag
+python3 -m lm_eval \
+  --model openai-completions \
+  --model_args engine=davinci \
+  --task crows_pairs_english_age
+
 ```
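The substance of this commit is a CLI migration: the `big-refactor` branch of lm-evaluation-harness replaces the `main.py` entry point and `gpt3` model name with the `python3 -m lm_eval` module invocation and the `openai-completions` model. A minimal sketch of the before/after invocations as argument lists, for side-by-side comparison (the flag values are taken verbatim from the diff; `shlex.join` is only used here to render them as shell strings):

```python
import shlex

# Old invocation (pre-big-refactor harness), as removed by this commit.
old_cmd = [
    "python3", "main.py",
    "--model", "gpt3",
    "--model_args", "engine=huggingface/bigcode/starcoder",
    "--tasks", "hellaswag",
]

# New invocation (big-refactor branch), as added by this commit.
new_cmd = [
    "python3", "-m", "lm_eval",
    "--model", "openai-completions",
    "--model_args", "engine=davinci",
    "--task", "crows_pairs_english_age",
]

print("old:", shlex.join(old_cmd))
print("new:", shlex.join(new_cmd))
```

Both invocations rely on `OPENAI_API_BASE` pointing at the local litellm proxy (set in Step 2 of the tutorial), so requests are routed to the proxy's `/completions` endpoint rather than to OpenAI.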