forked from phoenix/litellm-mirror
docs(lm_evaluation_harness.md): tutorial showing how to use lm evaluation harness with tgi
parent: b305492a0b
commit: 9afd3c8bfa
2 changed files with 25 additions and 0 deletions
docs/my-website/docs/tutorials/lm_evaluation_harness.md (new file, 24 lines)

@@ -0,0 +1,24 @@
# LM-Evaluation Harness with TGI

Evaluate LLMs 20x faster with TGI via litellm proxy's `/completions` endpoint.

**Step 1: Start the local proxy**

```shell
$ litellm --model huggingface/bigcode/starcoder
```

This starts an OpenAI-compatible endpoint at http://0.0.0.0:8000.

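The endpoint speaks the OpenAI `/completions` schema, so any client that can POST JSON can drive it. A minimal stdlib sketch of such a request (the prompt and sampling parameters below are illustrative, not from the tutorial, and the proxy must actually be running before the commented-out send will work):

```python
import json
import urllib.request

# Build (but do not send) an OpenAI-style /completions request aimed at
# the local litellm proxy. Prompt and sampling params are illustrative.
payload = {
    "model": "huggingface/bigcode/starcoder",
    "prompt": "def fibonacci(n):",
    "max_tokens": 32,
    "temperature": 0.0,
}
req = urllib.request.Request(
    "http://0.0.0.0:8000/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Once the proxy from Step 1 is up, the request can be sent with:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```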
||||
**Step 2: Set OpenAI API Base**
|
||||
```shell
|
||||
$ export OPENAI_API_BASE="http://0.0.0.0:8000"
|
||||
```
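If you launch the harness from a Python wrapper script rather than a shell, the same base URL can be set in-process before the harness starts. A sketch, under the assumption (implied by the tutorial) that the downstream OpenAI client reads `OPENAI_API_BASE` from the environment and joins it with a route such as `/completions`:

```python
import os
from urllib.parse import urljoin

# In-process equivalent of `export OPENAI_API_BASE=...`.
os.environ["OPENAI_API_BASE"] = "http://0.0.0.0:8000"

# Downstream clients typically join the base URL with an API route:
base = os.environ["OPENAI_API_BASE"]
endpoint = urljoin(base + "/", "completions")
```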

**Step 3: Run LM-Eval-Harness**

```shell
$ python3 main.py \
    --model gpt3 \
    --model_args engine=huggingface/bigcode/starcoder \
    --tasks hellaswag
```
@@ -98,6 +98,7 @@ const sidebars = {
  'tutorials/oobabooga',
  "tutorials/gradio_integration",
  "tutorials/model_config_proxy",
+ "tutorials/lm_evaluation_harness",
  'tutorials/huggingface_codellama',
  'tutorials/huggingface_tutorial',
  'tutorials/TogetherAI_liteLLM',