docs(lm_evaluation_harness.md): tutorial showing how to use lm evaluation harness with tgi

Krrish Dholakia 2023-11-01 17:45:16 -07:00
parent b305492a0b
commit 9afd3c8bfa
2 changed files with 25 additions and 0 deletions


@@ -0,0 +1,24 @@
# LM-Evaluation Harness with TGI
Evaluate LLMs 20x faster with TGI via the litellm proxy's `/completions` endpoint.

**Step 1: Start the local proxy**
```shell
$ litellm --model huggingface/bigcode/starcoder
```
This exposes an OpenAI-compatible endpoint at `http://0.0.0.0:8000`.

**Step 2: Set OpenAI API Base**
```shell
$ export OPENAI_API_BASE="http://0.0.0.0:8000"
```

**Step 3: Run LM-Eval-Harness**
```shell
$ python3 main.py \
--model gpt3 \
--model_args engine=huggingface/bigcode/starcoder \
--tasks hellaswag
```
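`--model_args` takes a comma-separated `key=value` string, which the harness splits into keyword arguments for the model backend. A rough sketch of that parsing (not lm-eval-harness's actual code):

```python
def parse_model_args(arg_string):
    """Split 'k1=v1,k2=v2' into a dict, as --model_args is consumed."""
    args = {}
    for pair in arg_string.split(","):
        key, _, value = pair.partition("=")
        args[key.strip()] = value.strip()
    return args

print(parse_model_args("engine=huggingface/bigcode/starcoder"))
# {'engine': 'huggingface/bigcode/starcoder'}
```

This is why the tutorial passes `engine=huggingface/bigcode/starcoder`: the proxy treats the `engine` value as the model to route to.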


@@ -98,6 +98,7 @@ const sidebars = {
'tutorials/oobabooga',
"tutorials/gradio_integration",
"tutorials/model_config_proxy",
"tutorials/lm_evaluation_harness",
'tutorials/huggingface_codellama',
'tutorials/huggingface_tutorial',
'tutorials/TogetherAI_liteLLM',