# Demo 02 - LLM configuration

In this step, we will play with various configuration options of the large language model (LLM).

## Temperature

`quarkus.langchain4j.openai.chat-model.temperature` controls the randomness of the model's responses. Lowering the temperature makes the model more conservative and deterministic, while increasing it makes the output more creative and varied.
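For example, you could set a low temperature in `application.properties` to keep answers focused and repeatable (the value `0.2` is illustrative; OpenAI accepts values between 0 and 2):

```properties
# Lower values (e.g. 0.2) make responses more deterministic,
# higher values (e.g. 1.5) make them more varied.
quarkus.langchain4j.openai.chat-model.temperature=0.2
```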

## Max tokens

`quarkus.langchain4j.openai.chat-model.max-tokens` limits the maximum number of tokens the model may generate in its response.
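A minimal sketch of capping the response length (the limit of `100` is an arbitrary example; responses may be cut off mid-sentence when the cap is reached):

```properties
# Stop generating once roughly 100 tokens have been produced.
quarkus.langchain4j.openai.chat-model.max-tokens=100
```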

## Frequency penalty

`quarkus.langchain4j.openai.chat-model.frequency-penalty` defines how strongly the model is penalized for repeating tokens that already appear in its output: higher values reduce repetition.
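For instance (the value `0.5` is illustrative; OpenAI accepts values between -2.0 and 2.0):

```properties
# Positive values discourage the model from repeating tokens
# it has already produced; negative values encourage repetition.
quarkus.langchain4j.openai.chat-model.frequency-penalty=0.5
```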