
Demo 06 - RAG Part 2

Retrieval-Augmented Generation (RAG) is a technique for extending the knowledge of the LLM used in the AI service: relevant content is retrieved from a knowledge base and added to the prompt.

The RAG pattern is composed of two parts:

  • Ingestion: This is the part that loads documents, turns them into embeddings, and stores them in the knowledge base.
  • Augmentation: This is the part that retrieves the most relevant stored information and adds it to the input of the LLM.
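The augmentation half of the pattern can be sketched in a few lines of plain Java. The prompt format and method names below are illustrative stand-ins, not the demo's actual code (LangChain4j handles this step for you in the demo):

```java
import java.util.List;

// Toy augmentation step: prepend the retrieved segments to the user's
// question so the LLM can answer from them. Retrieval itself is assumed
// to have already happened against the vector store.
public class ToyAugmenter {
    public static String augment(String question, List<String> retrievedSegments) {
        StringBuilder prompt = new StringBuilder("Answer using only this context:\n");
        for (String segment : retrievedSegments) {
            prompt.append("- ").append(segment).append('\n');
        }
        return prompt.append("Question: ").append(question).toString();
    }
}
```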

Embedding model

One of the core components of the RAG pattern is the embedding model, which transforms text into numerical vectors (embeddings). Comparing these vectors makes it possible to find the segments most relevant to a given query.
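To make the comparison concrete, here is a minimal sketch of cosine similarity, the measure typically used to compare embedding vectors. It is plain Java, independent of any embedding model, and the vectors are made up for illustration:

```java
// Cosine similarity: dot(a, b) / (|a| * |b|), ranging from -1 to 1.
// Vectors pointing in similar directions score close to 1.
public class CosineSimilarity {
    public static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        double[] query    = {0.9, 0.1, 0.0};
        double[] segment1 = {0.8, 0.2, 0.0}; // similar direction: high score
        double[] segment2 = {0.0, 0.1, 0.9}; // different direction: low score
        System.out.printf("segment1: %.3f%n", cosine(query, segment1));
        System.out.printf("segment2: %.3f%n", cosine(query, segment2));
    }
}
```

A vector store retrieves the top-scoring segments for a query using exactly this kind of comparison.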

Vector store

In the previous step, we used an in-memory store. Now we will use a persistent store to keep the embeddings between restarts.

Ingesting documents into the vector store

Edit the src/main/resources/application.properties file and add the following configuration:

rag.location=src/main/resources/rag
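In the demo itself, ingestion is delegated to LangChain4j, but the mechanics can be sketched in plain Java. The splitting rule, the character-frequency "embedding", and the in-memory list below are illustrative stand-ins for the real splitter, embedding model, and persistent vector store:

```java
import java.util.ArrayList;
import java.util.List;

// Toy ingestion pipeline: split each document into segments, "embed" each
// segment, and keep (segment, vector) pairs in a store. A real setup
// delegates embedding to a model and storage to a persistent vector store.
public class ToyIngestor {
    public record Entry(String segment, double[] vector) {}

    private final List<Entry> store = new ArrayList<>();

    // Stand-in embedding: character-frequency vector over 'a'..'z'.
    static double[] embed(String text) {
        double[] v = new double[26];
        for (char c : text.toLowerCase().toCharArray()) {
            if (c >= 'a' && c <= 'z') v[c - 'a']++;
        }
        return v;
    }

    // Split each document into sentence-sized segments and store them.
    public void ingest(List<String> documents) {
        for (String doc : documents) {
            for (String segment : doc.split("(?<=\\.)\\s+")) {
                store.add(new Entry(segment, embed(segment)));
            }
        }
    }

    public int size() { return store.size(); }
}
```

In the demo, the documents to ingest are read from the directory configured by rag.location above.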