feat: add --run to llama stack build (#1156)

# What does this PR do?

--run runs the stack that was just built, reusing the same arguments supplied
during the build (image name, image type, etc.).

This simplifies the workflow considerably and improves the UX for local users
getting started, since they no longer have to keep the flags of the two
commands (build and then run) in sync.
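
For illustration, here is a rough, self-contained sketch of the combined flow from the CLI's point of view; `build_stack` and `run_stack` are hypothetical stand-ins, not the actual helpers in `llama_stack/cli`:

```python
# Illustrative only: the real implementation lives in llama_stack/cli;
# build_stack()/run_stack() below are hypothetical stand-ins.
import argparse


def build_stack(args: argparse.Namespace) -> str:
    """Stand-in for the build step; returns the path of the generated run config."""
    print(f"building template {args.template} as a {args.image_type} image")
    return f"{args.template}-run.yaml"  # assumed path shape, for illustration


def run_stack(config_path: str, image_type: str) -> None:
    """Stand-in for `llama stack run <config>`."""
    print(f"running {config_path} with image type {image_type}")


parser = argparse.ArgumentParser(prog="llama stack build")
parser.add_argument("--template")
parser.add_argument("--image-name")
parser.add_argument("--image-type", choices=["container", "conda", "venv"])
parser.add_argument("--run", action="store_true",
                    help="run the stack right after building it")
args = parser.parse_args()

config_path = build_stack(args)
if args.run:
    # Reuse the arguments already given to build instead of asking the user
    # to repeat them for a separate `llama stack run` invocation.
    run_stack(config_path, image_type=args.image_type)
```

The point is simply that `--run` chains the two steps using the arguments the user already supplied to build.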

Also moved `ImageType` to `distribution.utils`, since its previous location
caused circular import errors.
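
For reference, the relocated module (path taken from the new import in the diff below; members match the class removed from its old home) presumably looks like:

```python
# llama_stack/distribution/utils/image_types.py (reconstructed from the diff)
from enum import Enum


class ImageType(Enum):
    container = "container"
    conda = "conda"
    venv = "venv"
```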

## Test Plan

Tested locally using the following command:

`llama stack build --run --template ollama --image-type venv`

Signed-off-by: Charlie Doern <cdoern@redhat.com>
Authored by Charlie Doern on 2025-02-23 22:06:09 -05:00; committed by GitHub
parent 6227e1e3b9
commit 34e3faa4e8
6 changed files with 129 additions and 87 deletions


@@ -7,7 +7,6 @@
 import importlib.resources
 import logging
 import sys
-from enum import Enum
 from pathlib import Path
 from typing import Dict, List
 
@@ -18,6 +17,7 @@ from llama_stack.distribution.datatypes import BuildConfig, Provider
 from llama_stack.distribution.distribution import get_provider_registry
 from llama_stack.distribution.utils.config_dirs import BUILDS_BASE_DIR
 from llama_stack.distribution.utils.exec import run_command, run_with_pty
+from llama_stack.distribution.utils.image_types import ImageType
 from llama_stack.providers.datatypes import Api
 
 log = logging.getLogger(__name__)
@@ -33,12 +33,6 @@ SERVER_DEPENDENCIES = [
 ]
 
 
-class ImageType(Enum):
-    container = "container"
-    conda = "conda"
-    venv = "venv"
-
-
 class ApiInput(BaseModel):
     api: Api
     provider: str