Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-08-09 19:58:29 +00:00
Correct typos in Zero to Hero guide

parent: dd1265bea7
commit: a4415f5657

1 changed file with 3 additions and 3 deletions
@@ -45,7 +45,7 @@ If you're looking for more specific topics, we have a [Zero to Hero Guide](#next
 ---
 
-## Install Dependencies and Set Up Environmen
+## Install Dependencies and Set Up Environment
 
 1. **Create a Conda Environment**:
 
    Create a new Conda environment with Python 3.10:
@@ -110,7 +110,7 @@ If you're looking for more specific topics, we have a [Zero to Hero Guide](#next
    --env SAFETY_MODEL=$SAFETY_MODEL
    --env OLLAMA_URL=$OLLAMA_URL
    ```
-Note: Everytime you run a new model with `ollama run`, you will need to restart the llama stack. Otherwise it won't see the new model.
+Note: Every time you run a new model with `ollama run`, you will need to restart the llama stack. Otherwise it won't see the new model.
 
 The server will start and listen on `http://localhost:5001`.
 
@@ -191,7 +191,7 @@ You can check the available models with the command `llama-stack-client models l
 
 You can also interact with the Llama Stack server using a simple Python script. Below is an example:
 
-### 1. Activate Conda Environmen
+### 1. Activate Conda Environment
 
 ```bash
 conda activate ollama
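For context, the `--env SAFETY_MODEL` and `--env OLLAMA_URL` lines in the second hunk belong to the guide's server-launch command. A minimal sketch of that invocation follows; only the `--env` flag names and port `5001` come from the diff context above, while the distro name `ollama` and the exported values are assumptions, not taken from this commit:

```shell
# Example values only (assumptions); substitute your own model and URL.
export SAFETY_MODEL="meta-llama/Llama-Guard-3-1B"
export OLLAMA_URL="http://localhost:11434"

# Compose the launch command the hunk's --env lines belong to.
# The "ollama" distro name is an assumption based on the guide's context.
CMD="llama stack run ollama --port 5001 --env SAFETY_MODEL=$SAFETY_MODEL --env OLLAMA_URL=$OLLAMA_URL"
echo "$CMD"
```

Per the note corrected in this commit, rerun this command after each new `ollama run <model>` so the stack picks up the new model.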