Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-06-28 02:53:30 +00:00
docs: fix model name (#1926)
# What does this PR do?

Use llama3.2:3b for consistency.

Signed-off-by: Sébastien Han <seb@redhat.com>

parent 1be66d754e
commit 1f2df59ece

1 changed file with 3 additions and 3 deletions
````diff
@@ -9,10 +9,10 @@ In this guide, we'll walk through how to build a RAG agent locally using Llama S
 ### 1. Download a Llama model with Ollama
 
 ```bash
-ollama pull llama3.2:3b-instruct-fp16
+ollama pull llama3.2:3b
 ```
 
-This will instruct the Ollama service to download the Llama 3.2 3B Instruct model, which we'll use in the rest of this guide.
+This will instruct the Ollama service to download the Llama 3.2 3B model, which we'll use in the rest of this guide.
 
 ```{admonition} Note
 :class: tip
````
````diff
@@ -176,7 +176,7 @@ python inference.py
 ```
 Sample output:
 ```
-Model: llama3.2:3b-instruct-fp16
+Model: llama3.2:3b
 Here is a haiku about coding:
 
 Lines of code unfold
````
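As background on the two identifiers this commit swaps: Ollama model references follow a `name:tag` convention (similar to container image references), where the tag selects a variant such as size, tuning, or quantization, and an omitted tag defaults to `latest`. A minimal illustrative sketch of that convention; `split_model_ref` is a hypothetical helper for this note, not part of llama-stack or Ollama:

```python
# Ollama model references take the form "name:tag"; the tag picks a
# variant (e.g. parameter count, instruct tuning, quantization).
# Illustrative helper only -- not an llama-stack or Ollama API.
def split_model_ref(ref: str) -> tuple[str, str]:
    """Split an Ollama-style model reference into (name, tag)."""
    name, _, tag = ref.partition(":")
    return name, tag or "latest"  # a missing tag defaults to "latest"

print(split_model_ref("llama3.2:3b-instruct-fp16"))  # the old reference
print(split_model_ref("llama3.2:3b"))                # the new reference
```

Both references name the same base model (`llama3.2`); only the tag changes, which is why the docs update is a pure find-and-replace.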