From a1fb23b2684dddc85dacda52104707daa115846c Mon Sep 17 00:00:00 2001
From: Paul McCarthy
Date: Fri, 17 Jan 2025 14:31:57 +0000
Subject: [PATCH] Fixing small typo in quick start guide

---
 docs/source/getting_started/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/getting_started/index.md b/docs/source/getting_started/index.md
index d7c3fe9e5..602b5a635 100644
--- a/docs/source/getting_started/index.md
+++ b/docs/source/getting_started/index.md
@@ -1,6 +1,6 @@
 # Quick Start
 
-In this guide, we'll through how you can use the Llama Stack client SDK to build a simple RAG agent.
+In this guide, we'll walk through how you can use the Llama Stack client SDK to build a simple RAG agent.
 The most critical requirement for running the agent is running inference on the underlying Llama model.
 Depending on what hardware (GPUs) you have available, you have various options. We will use `Ollama` for
 this purpose as it is the easiest to get started with and yet robust.