forked from phoenix/litellm-mirror
update config
parent cf5060f136
commit 9c97e1513c
1 changed file with 2 additions and 0 deletions
@@ -40,6 +40,8 @@ response = completion_with_config(model="gpt-3.5-turbo", messages=messages, conf
You can also use model configs to automatically select a model based on prompt size: litellm counts the tokens in the prompt, compares that count against each model's max token limit, and selects a model whose max tokens exceed the prompt's token count.
If the prompt is larger than every available model's context window, litellm automatically trims the prompt from the middle (preserving any system message) so that it fits the largest available model.
```python
from litellm import completion_with_config
import os
```