forked from phoenix/litellm-mirror
(docs) input params
This commit is contained in:
parent
985583023a
commit
fdbaceab8e
1 changed file with 1 addition and 1 deletion
@@ -135,7 +135,7 @@ def completion(
 - `response_format`: *object (optional)* - An object specifying the format that the model must output.
-- Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model generates is valid JSON.
+- Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
 - Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
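The truncation caveat in the docs above is easy to mishandle in client code: in JSON mode the content is only guaranteed to be valid JSON when generation finished normally. A minimal sketch of the check (the helper name `parse_json_mode_content` is hypothetical, not part of litellm):

```python
import json

def parse_json_mode_content(content: str, finish_reason: str) -> dict:
    """Parse a JSON-mode response, guarding against truncation.

    As the docs note, when finish_reason == "length" the generation hit
    max_tokens (or the context limit) and the content may be cut off
    mid-object, so json.loads would fail or return partial data.
    """
    if finish_reason == "length":
        # Hypothetical handling: surface the truncation instead of
        # attempting to parse a possibly incomplete JSON string.
        raise ValueError(
            "response truncated (finish_reason='length'); "
            "raise max_tokens or shorten the conversation"
        )
    return json.loads(content)
```

In practice you would feed this `response.choices[0].message.content` and `response.choices[0].finish_reason` from a completion call made with `response_format={"type": "json_object"}`.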