mirror of https://github.com/meta-llama/llama-stack.git
update start-stack.sh with missing color and if statement logic
Add conda image name
commit dae8fd0a36 (parent 2fcb70b789)
2 changed files with 5 additions and 4 deletions
@@ -69,7 +69,7 @@ which defines the providers and their settings.
 Now let's build and run the Llama Stack config for Ollama.
 
 ```bash
-INFERENCE_MODEL=llama3.2:3b llama stack build --template ollama --image-type conda --run
+INFERENCE_MODEL=llama3.2:3b llama stack build --template ollama --image-type conda --image-name llama3-3b-conda --run
 ```
 :::
 :::{tab-item} Using a Container
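
Editorial note on the hunk above: `--image-name` gives the conda environment created by `llama stack build` an explicit name so it can be found and reused later. A minimal sketch of how the named environment might be reused, assuming the flag names the conda environment as described in the CLI help (`llama3-3b-conda` is simply the name chosen in this diff):

```bash
# Sketch (assumption): after the build above, a conda env with the given name should exist.
conda env list | grep llama3-3b-conda   # confirm the environment was created
conda activate llama3-3b-conda          # reuse it for a later run of the stack
```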
@@ -77,10 +77,9 @@ You can use a container image to run the Llama Stack server. We provide several
 component that works with different inference providers out of the box. For this guide, we will use
 `llamastack/distribution-ollama` as the container image. If you'd like to build your own image or customize the
 configurations, please check out [this guide](../references/index.md).
 
 First lets setup some environment variables and create a local directory to mount into the container’s file system.
 ```bash
-export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
+export INFERENCE_MODEL="llama3.2:3b"
 export LLAMA_STACK_PORT=8321
 mkdir -p ~/.llama
 ```
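
Editorial note: these variables and the `~/.llama` directory feed the container run step later in the same guide. A rough sketch of how they are typically consumed (the entrypoint flags and the `OLLAMA_URL` value are assumptions based on the surrounding guide, not shown in this hunk):

```bash
# Sketch (assumptions: entrypoint flags are illustrative, not part of this hunk)
docker run -it \
  -p "$LLAMA_STACK_PORT:$LLAMA_STACK_PORT" \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-ollama \
  --port "$LLAMA_STACK_PORT" \
  --env INFERENCE_MODEL="$INFERENCE_MODEL" \
  --env OLLAMA_URL=http://host.docker.internal:11434
```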
@@ -227,6 +226,7 @@ ChatCompletionResponse(
 Alternatively, you can run inference using the Llama Stack client SDK.
 
 ### i. Create the Script
 
 Create a file `inference.py` and add the following code:
 ```python
 from llama_stack_client import LlamaStackClient
start-stack.sh

@@ -18,6 +18,7 @@ VIRTUAL_ENV=${VIRTUAL_ENV:-}
 set -euo pipefail
 
 RED='\033[0;31m'
+GREEN='\033[0;32m'
 NC='\033[0m' # No Color
 
 error_handler() {
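
Why the added `GREEN` definition matters (editorial note, not part of the diff): the script runs under `set -euo pipefail`, and with `-u` (nounset) an expansion of an undefined variable such as `${GREEN}` aborts the script instead of printing. A minimal reproduction sketch:

```bash
#!/usr/bin/env bash
set -euo pipefail
NC='\033[0m'
# GREEN is never defined here, so under `set -u` the next line
# exits with "GREEN: unbound variable" instead of printing.
echo -e "${GREEN}Virtual environment already activated${NC}"
```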
@@ -73,7 +74,7 @@ done
 PYTHON_BINARY="python"
 case "$env_type" in
 "venv")
-if [ -n "$VIRTUAL_ENV" && "$VIRTUAL_ENV" == "$env_path_or_name" ]; then
+if [ -n "$VIRTUAL_ENV" ] && [ "$VIRTUAL_ENV" == "$env_path_or_name" ]; then
 echo -e "${GREEN}Virtual environment already activated${NC}" >&2
 else
 # Activate virtual environment
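
Editorial note on the fix above: inside single brackets, `[` is the `test` command and does not understand `&&`. The shell's `&&` splits the old bracketed expression into two commands, the first of which (`[ -n "$VIRTUAL_ENV"`) fails with a "missing `]`" error, so the check could never succeed. Joining two separate `[ ... ]` tests with the shell's `&&`, as the new line does, or using bash's `[[ ... && ... ]]`, is the correct form. A small sketch of the two working variants (variables stand in for the script's own):

```bash
# Both forms are equivalent here; the first is what the diff adopts.
if [ -n "$VIRTUAL_ENV" ] && [ "$VIRTUAL_ENV" == "$env_path_or_name" ]; then
  echo "already active"
fi

# bash-only alternative: [[ ]] supports && directly.
if [[ -n "$VIRTUAL_ENV" && "$VIRTUAL_ENV" == "$env_path_or_name" ]]; then
  echo "already active"
fi
```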