From 112ffa35963e9effbaf475b61fc6bb1d4a883ad6 Mon Sep 17 00:00:00 2001
From: ishaan-jaff
Date: Tue, 26 Dec 2023 14:53:30 +0530
Subject: [PATCH] (docs) add logprobs, top_logprobs

---
 docs/my-website/docs/completion/input.md | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/docs/my-website/docs/completion/input.md b/docs/my-website/docs/completion/input.md
index 17790a5aa..676e4d232 100644
--- a/docs/my-website/docs/completion/input.md
+++ b/docs/my-website/docs/completion/input.md
@@ -120,7 +120,7 @@ def completion(
 
 ## Optional Fields
 
-`temperature`: *number or null (optional)* - The sampling temperature to be used, between 0 and 2. Higher values like 0.8 produce more random outputs, while lower values like 0.2 make outputs more focused and deterministic.
+- `temperature`: *number or null (optional)* - The sampling temperature to be used, between 0 and 2. Higher values like 0.8 produce more random outputs, while lower values like 0.2 make outputs more focused and deterministic.
 
 - `top_p`: *number or null (optional)* - An alternative to sampling with temperature. It instructs the model to consider the results of the tokens with top_p probability. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered.
 
@@ -160,6 +160,10 @@ def completion(
 - `timeout`: *int (optional)* - Timeout in seconds for completion requests (Defaults to 600 seconds)
 
+- `logprobs`: *bool (optional)* - Whether to return log probabilities of the output tokens. If true, returns the log probabilities of each output token in the `content` of `message`.
+
+- `top_logprobs`: *int (optional)* - An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to true if this parameter is used.
+
 #### Deprecated Params
 
 - `functions`: *array* - A list of functions that the model may use to generate JSON inputs. Each function should have the following properties:
 
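
The two parameters this patch documents interact: per the added doc lines, `top_logprobs` must be an integer between 0 and 5 and requires `logprobs` to be true. A minimal sketch of that validation rule, using a hypothetical helper name (`validate_logprob_params` is not part of LiteLLM):

```python
def validate_logprob_params(logprobs=None, top_logprobs=None):
    """Hypothetical helper: enforce the constraints documented in the patch."""
    if top_logprobs is not None:
        # top_logprobs only makes sense when logprobs is enabled
        if logprobs is not True:
            raise ValueError("`logprobs` must be set to true if `top_logprobs` is used")
        # the docs allow an integer between 0 and 5
        if not isinstance(top_logprobs, int) or not 0 <= top_logprobs <= 5:
            raise ValueError("`top_logprobs` must be an integer between 0 and 5")
    return {"logprobs": logprobs, "top_logprobs": top_logprobs}


# Example: request the top-2 alternatives per token position
params = validate_logprob_params(logprobs=True, top_logprobs=2)
print(params)  # {'logprobs': True, 'top_logprobs': 2}
```

In an actual request these keyword arguments would simply be passed through to `litellm.completion(...)` alongside `model` and `messages`.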