Demo 02 - LLM configuration
===============================================

In this step, we will play with various configurations of the large language model (LLM).

# Temperature

`quarkus.langchain4j.openai.chat-model.temperature` controls the randomness of the model’s responses.
Lowering the temperature will make the model more conservative, while increasing it will make it more creative.
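
As a sketch, the setting might look like this in `application.properties` (the value `0.2` is an illustrative choice; the OpenAI API accepts values from 0.0 to 2.0, with 1.0 as the typical default):

```properties
# Illustrative value: a low temperature keeps answers focused and repeatable;
# raise it (toward 1.0 or above) for more varied, creative output.
quarkus.langchain4j.openai.chat-model.temperature=0.2
```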

# Max tokens

`quarkus.langchain4j.openai.chat-model.max-tokens` limits the maximum number of tokens the model can generate in a single response.
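
For example, to cap the length of generated answers (the value `1000` is an illustrative choice, not a recommendation):

```properties
# Illustrative value: the model stops generating once it has produced
# 1000 tokens, truncating the response if necessary.
quarkus.langchain4j.openai.chat-model.max-tokens=1000
```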

# Frequency penalty

`quarkus.langchain4j.openai.chat-model.frequency-penalty` defines how much the model should avoid repeating itself. Positive values penalize tokens that already appear frequently in the output, making verbatim repetition less likely.
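
A sketch in `application.properties` (the value `0.5` is illustrative; the OpenAI API accepts values between -2.0 and 2.0, with 0 as the default):

```properties
# Illustrative value: a mild positive penalty discourages the model from
# reusing tokens it has already emitted; negative values encourage repetition.
quarkus.langchain4j.openai.chat-model.frequency-penalty=0.5
```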