Mirror of https://github.com/BerriAI/litellm.git, synced 2025-04-24 18:24:20 +00:00
(docs) eval suites
This commit is contained in:
parent
39b5b03ac3
commit
3280a8e2e2
1 changed file with 3 additions and 0 deletions
@@ -234,6 +234,9 @@ pip install autoevals
### Quick Start
In this code sample, we use the `Factuality()` evaluator from `autoevals.llm` to test whether an output is factual compared to an original (expected) value.
**Autoevals uses gpt-3.5-turbo / gpt-4-turbo by default to evaluate responses.**
See the autoevals docs for the list of [supported evaluators](https://www.braintrustdata.com/docs/autoevals/python#autoevalsllm) - Translation, Summary, Security evaluators, etc.
```python
from autoevals.llm import Factuality
import litellm

# litellm completion call for the model under test
question = "which country has the highest population?"
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": question}],
)
print(response)

# use the Factuality() evaluator to score the output against the expected answer
evaluator = Factuality()
result = evaluator(
    output=response.choices[0]["message"]["content"],
    expected="India",
    input=question,
)
print(result)
```