forked from phoenix/litellm-mirror
(docs) add logprobs, top_logprobs
This commit is contained in:
parent
f3844b309f
commit
112ffa3596
1 changed files with 5 additions and 1 deletions
@ -120,7 +120,7 @@ def completion(
## Optional Fields
`temperature`: *number or null (optional)* - The sampling temperature to be used, between 0 and 2. Higher values like 0.8 produce more random outputs, while lower values like 0.2 make outputs more focused and deterministic.
- `temperature`: *number or null (optional)* - The sampling temperature to be used, between 0 and 2. Higher values like 0.8 produce more random outputs, while lower values like 0.2 make outputs more focused and deterministic.
- `top_p`: *number or null (optional)* - An alternative to sampling with temperature, where the model considers only the tokens comprising the top `top_p` probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered.
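To make the `temperature` description above concrete, the sketch below shows how dividing logits by the temperature before a softmax sharpens or flattens the resulting token distribution (illustrative pure-Python code with made-up logits, not litellm internals):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Rescale logits by 1/temperature, then apply a numerically stable softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
low = softmax_with_temperature(logits, 0.2)   # lower temperature: sharper, more deterministic
high = softmax_with_temperature(logits, 0.8)  # higher temperature: flatter, more random
```

With the lower temperature, the most likely token takes a larger share of the probability mass, which is why outputs become more focused and deterministic.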
@ -160,6 +160,10 @@ def completion(
- `timeout`: *int (optional)* - Timeout in seconds for completion requests (defaults to 600 seconds).
- `logprobs`: *bool (optional)* - Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`.
- `top_logprobs`: *int (optional)* - An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to true if this parameter is used.
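The constraint that `top_logprobs` requires `logprobs=True` can be enforced before a request is sent. A minimal sketch, assuming the documented semantics above (`build_params` is a hypothetical helper for illustration, not part of litellm):

```python
def build_params(logprobs=None, top_logprobs=None, temperature=None, top_p=None):
    """Collect optional completion() fields, enforcing the documented
    constraint that `top_logprobs` requires `logprobs=True`."""
    if top_logprobs is not None:
        if not logprobs:
            raise ValueError("`logprobs` must be set to true when `top_logprobs` is used")
        if not 0 <= top_logprobs <= 5:
            raise ValueError("`top_logprobs` must be an integer between 0 and 5")
    params = {
        "logprobs": logprobs,
        "top_logprobs": top_logprobs,
        "temperature": temperature,
        "top_p": top_p,
    }
    # Drop unset fields so only explicitly provided params reach the request.
    return {k: v for k, v in params.items() if v is not None}

kwargs = build_params(logprobs=True, top_logprobs=3)
```

The returned dict can then be splatted into a `completion(...)` call as keyword arguments.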
#### Deprecated Params
- `functions`: *array* - A list of functions that the model may use to generate JSON inputs. Each function should have the following properties: