Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-06-28 02:53:30 +00:00)
Fixing small typo in quick start guide (#807)
# What does this PR do?

Fixing small typo in the quick start guide.

## Before submitting

- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
This commit is contained in:

parent 53b5f6b24a
commit e1decaec9d

1 changed file with 1 addition and 1 deletion
@@ -1,6 +1,6 @@
 # Quick Start
 
-In this guide, we'll through how you can use the Llama Stack client SDK to build a simple RAG agent.
+In this guide, we'll walk through how you can use the Llama Stack client SDK to build a simple RAG agent.
 
 The most critical requirement for running the agent is running inference on the underlying Llama model. Depending on what hardware (GPUs) you have available, you have various options. We will use `Ollama` for this purpose as it is the easiest to get started with and yet robust.
 
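For context (this is not part of the one-line change above), the quick start guide goes on to run inference through `Ollama` and talk to it with the Llama Stack client SDK. Below is a minimal sketch of that first step, assuming the `llama-stack-client` Python package and a Llama Stack distribution (backed by Ollama) already running locally on port 5000; the port, method names, and parameters may differ across SDK and server versions.

```python
# Illustrative sketch only, not part of this PR: connect to a locally running
# Llama Stack server with the Python client SDK and confirm it is reachable.
# Assumes a distribution backed by Ollama is already serving on this port.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")

# List the models the server exposes to confirm the connection works.
print(client.models.list())
```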