forked from phoenix-oss/llama-stack-mirror
docs: fix model name (#1926)
# What does this PR do?

Use llama3.2:3b for consistency.

Signed-off-by: Sébastien Han <seb@redhat.com>
This commit is contained in:
parent 1be66d754e
commit 1f2df59ece

1 changed file with 3 additions and 3 deletions
@@ -9,10 +9,10 @@ In this guide, we'll walk through how to build a RAG agent locally using Llama S
 ### 1. Download a Llama model with Ollama
 
 ```bash
-ollama pull llama3.2:3b-instruct-fp16
+ollama pull llama3.2:3b
 ```
 
-This will instruct the Ollama service to download the Llama 3.2 3B Instruct model, which we'll use in the rest of this guide.
+This will instruct the Ollama service to download the Llama 3.2 3B model, which we'll use in the rest of this guide.
 
 ```{admonition} Note
 :class: tip
@@ -176,7 +176,7 @@ python inference.py
 ```
 Sample output:
 ```
-Model: llama3.2:3b-instruct-fp16
+Model: llama3.2:3b
 Here is a haiku about coding:
 
 Lines of code unfold