forked from phoenix/litellm-mirror
(docs) litellm eval suites
This commit is contained in:
parent 108336aeef
commit 7ddd29da93
1 changed file with 2 additions and 2 deletions
@@ -7,7 +7,7 @@ import TabItem from '@theme/TabItem';
## Using LiteLLM with ML Flow
MLflow provides an API `mlflow.evaluate()` to help evaluate your LLMs https://mlflow.org/docs/latest/llms/llm-evaluate/index.html
-## Pre Requisites
+### Pre Requisites
```shell
pip install litellm
```
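For context (not part of this commit), a minimal sketch of how `mlflow.evaluate()` might be wired to a LiteLLM-backed prediction function is shown below; the model name, dataset, and column names are placeholders:

```python
import mlflow
import pandas as pd
import litellm

# Illustrative question-answering dataset (placeholder values)
eval_data = pd.DataFrame(
    {
        "inputs": [
            "What is MLflow?",
            "What is LiteLLM?",
        ],
        "ground_truth": [
            "MLflow is an open-source platform for managing the ML lifecycle.",
            "LiteLLM provides a unified interface for calling many LLM providers.",
        ],
    }
)

def predict(df):
    # Answer each question through LiteLLM (placeholder model name)
    answers = []
    for question in df["inputs"]:
        response = litellm.completion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": question}],
        )
        answers.append(response.choices[0].message.content)
    return answers

with mlflow.start_run():
    results = mlflow.evaluate(
        model=predict,
        data=eval_data,
        targets="ground_truth",
        model_type="question-answering",
    )
    print(results.metrics)
```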
@@ -224,7 +224,7 @@ See evaluation table below:
AutoEvals is a tool for quickly and easily evaluating AI model outputs using best practices.
https://github.com/braintrustdata/autoevals
-## Pre Requisites
+### Pre Requisites
```shell
pip install litellm
```
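As an illustration (again, not part of this commit), a minimal sketch of scoring a LiteLLM completion with AutoEvals' `Factuality` evaluator could look like the following; the model name, question, and expected answer are placeholders, and an API key for the underlying provider is assumed:

```python
import litellm
from autoevals.llm import Factuality

question = "Which country has the highest population?"
expected = "China"

# Get a completion through LiteLLM (placeholder model name)
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": question}],
)
output = response.choices[0].message.content

# Score the output for factuality against the expected answer
evaluator = Factuality()
result = evaluator(output, expected, input=question)
print(result.score)
print(result.metadata)
```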