# Demo 02 - LLM configuration

In this step, we will play with various configuration options of the language model (LLM).

## Temperature

`quarkus.langchain4j.openai.chat-model.temperature` controls the randomness of the model's responses. Lowering the temperature makes the model more conservative, while increasing it makes it more creative.
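
For example, a low temperature keeps answers focused and repeatable (the value below is only illustrative; pick whatever suits your experiment):

```properties
# Illustrative value: 0.2 favors deterministic, conservative answers;
# values closer to 1.0 produce more varied, creative output.
quarkus.langchain4j.openai.chat-model.temperature=0.2
```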

## Max tokens

`quarkus.langchain4j.openai.chat-model.max-tokens` caps the number of tokens the model can generate in a single response.
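
For instance, to keep responses short you might set a small limit (again, the value is just an example):

```properties
# Illustrative value: the model stops generating after roughly 100 tokens,
# which may truncate longer answers mid-sentence.
quarkus.langchain4j.openai.chat-model.max-tokens=100
```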

## Frequency penalty

`quarkus.langchain4j.openai.chat-model.frequency-penalty` defines how strongly the model is discouraged from repeating tokens it has already produced: higher values make repetition less likely.
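
A sample setting might look like this (the value is illustrative; OpenAI accepts values between -2.0 and 2.0):

```properties
# Illustrative value: a positive penalty nudges the model away from
# repeating words and phrases it has already used in the response.
quarkus.langchain4j.openai.chat-model.frequency-penalty=0.5
```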