From fdbaceab8e91c469581c6025d3662c1b220d87b4 Mon Sep 17 00:00:00 2001
From: ishaan-jaff
Date: Fri, 17 Nov 2023 16:07:23 -0800
Subject: [PATCH] (docs) input params

---
 docs/my-website/docs/completion/input.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/my-website/docs/completion/input.md b/docs/my-website/docs/completion/input.md
index ae397f6e0..8e19082de 100644
--- a/docs/my-website/docs/completion/input.md
+++ b/docs/my-website/docs/completion/input.md
@@ -135,7 +135,7 @@ def completion(
 
 - `response_format`: *object (optional)* - An object specifying the format that the model must output.
 
-  - Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model generates is valid JSON.
+  - Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
 
   - Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
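For context, the JSON-mode pattern the patched docs describe can be sketched as below. This is a minimal illustration, not part of the patch: the model name and prompt are arbitrary, and the actual `completion` call is commented out because it would require an API key and network access.

```python
import json

# Per the docs: JSON mode needs BOTH response_format={"type": "json_object"}
# AND an explicit instruction (system or user message) to produce JSON;
# without the instruction the model may emit whitespace until max_tokens.
messages = [
    {"role": "system",
     "content": "You are a helpful assistant. Respond only with valid JSON."},
    {"role": "user",
     "content": "List three primary colors under the key 'colors'."},
]
response_format = {"type": "json_object"}

# Illustrative call (requires credentials, so commented out here):
# from litellm import completion
# response = completion(
#     model="gpt-3.5-turbo",
#     messages=messages,
#     response_format=response_format,
# )
# content = response.choices[0].message.content
# # Check finish_reason: "length" means the JSON may be truncated mid-object.
# data = json.loads(content)

print(json.dumps(response_format))
```

The `print` at the end simply shows the wire form of the parameter, `{"type": "json_object"}`, as it appears in the request payload.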