feat: export distribution container build artifacts

Add a new --export-dir flag to the `llama stack build` command that
allows users to export container build artifacts to a specified
directory instead of building the container directly. This feature is
useful for:

- Building containers in different environments
- Sharing build configurations
- Customizing the build process

The exported tarball includes:
- Containerfile (Dockerfile)
- Run configuration file (if building from config)
- External provider files (if specified)
- Build script to assist with building the image

The tarball is named with a timestamp for uniqueness: <distro-name>_<timestamp>.tar.gz
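
For example (paths illustrative):

    llama stack build --config my-build.yaml --image-type container --export-dir ./my-build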

Documentation has been updated in building_distro.md to reflect this new
functionality, and integration tests have been added.

Signed-off-by: Sébastien Han <seb@redhat.com>

@@ -198,3 +198,55 @@ jobs:
'source /etc/os-release && echo "$ID"' \
| grep -qE '^(rhel|ubi)$' \
|| { echo "Base image is not UBI 9!"; exit 1; }
export-build:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Set up Python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: '3.10'
- name: Install uv
uses: astral-sh/setup-uv@0c5e2b8115b80b4c7c5ddf6ffdd634974642d182 # v5.4.1
with:
python-version: "3.10"
- name: Install LlamaStack
run: |
uv venv
source .venv/bin/activate
uv pip install -e .
- name: Pin template to UBI9 base
run: |
yq -i '
.image_type = "container" |
.image_name = "ubi9-test" |
.distribution_spec.container_image = "registry.access.redhat.com/ubi9:latest"
' llama_stack/templates/starter/build.yaml
- name: Test the export
run: |
# Support for USE_COPY_NOT_MOUNT=true LLAMA_STACK_DIR=. will be added in the future
uv run llama stack build --config llama_stack/templates/starter/build.yaml --export-dir export
tarball_found=""
for file in export/*; do
  echo "File: $file"
  if [[ "$file" == *.tar.gz ]]; then
    echo "Tarball found"
    tarball_found=1
    tar -xzvf "$file" -C export
    break
  fi
done
if [ -z "$tarball_found" ]; then
echo "Tarball not found"
exit 1
fi
cd export
docker build -t export-test -f ./Containerfile .

@@ -53,7 +53,7 @@ The main points to consider are:
```
llama stack build -h
usage: llama stack build [-h] [--config CONFIG] [--template TEMPLATE] [--list-templates] [--image-type {conda,container,venv}] [--image-name IMAGE_NAME] [--print-deps-only] [--run]
usage: llama stack build [-h] [--config CONFIG] [--template TEMPLATE] [--list-templates] [--image-type {conda,container,venv}] [--image-name IMAGE_NAME] [--print-deps-only] [--run] [--export-dir EXPORT_DIR]
Build a Llama stack container
@@ -71,6 +71,8 @@ options:
found. (default: None)
--print-deps-only Print the dependencies for the stack only, without building the stack (default: False)
--run Run the stack after building using the same image type, name, and other applicable arguments (default: False)
--export-dir EXPORT_DIR
Export the build artifacts to a specified directory instead of building the container. This will create a tarball containing the Dockerfile and all necessary files to build the container. (default: None)
```
@@ -260,6 +262,24 @@ Containerfile created successfully in /tmp/tmp.viA3a3Rdsg/Containerfile
You can now edit ~/meta-llama/llama-stack/tmp/configs/ollama-run.yaml and run `llama stack run ~/meta-llama/llama-stack/tmp/configs/ollama-run.yaml`
```
You can also export the build artifacts to a specified directory instead of building the container directly. This is useful when you want to:
- Build the container in a different environment
- Share the build configuration with others
- Customize the build process
To export the build artifacts, use the `--export-dir` flag:
```
llama stack build --config my-build.yaml --image-type container --export-dir ./my-build
```
This will create a tarball in the specified directory containing:
- The Dockerfile (named Containerfile)
- The run configuration file (if building from a config)
- Any external provider files (if specified in the config)
The tarball will be named with a timestamp to ensure uniqueness, for example: `<distro-name>_<timestamp>.tar.gz`
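For example, to build the image from an exported tarball on another machine (image tag illustrative):
```
tar -xzf my-build/<distro-name>_<timestamp>.tar.gz -C my-build
docker build -t my-distro -f my-build/Containerfile my-build
```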
After this step is successful, you should be able to find the built container image and test it with `llama stack run <path/to/run.yaml>`.
:::

@@ -230,6 +230,7 @@ def run_stack_build_command(args: argparse.Namespace) -> None:
image_name=image_name,
config_path=args.config,
template_name=args.template,
export_dir=args.export_dir,
)
except (Exception, RuntimeError) as exc:
@@ -343,6 +344,7 @@ def _run_stack_build_command_from_build_config(
image_name: str | None = None,
template_name: str | None = None,
config_path: str | None = None,
export_dir: str | None = None,
) -> str:
image_name = image_name or build_config.image_name
if build_config.image_type == LlamaStackImageType.CONTAINER.value:
@@ -385,6 +387,7 @@
image_name,
template_or_config=template_name or config_path or str(build_file_path),
run_config=run_config_file,
export_dir=export_dir,
)
if return_code != 0:
raise RuntimeError(f"Failed to build image {image_name}")

@@ -5,6 +5,7 @@
# the root directory of this source tree.
import argparse
import textwrap
from pathlib import Path
from llama_stack.cli.stack.utils import ImageType
from llama_stack.cli.subcommand import Subcommand
@@ -82,6 +83,13 @@ the build. If not specified, currently active environment will be used if found.
help="Build a config for a list of providers and only those providers. This list is formatted like: api1=provider1,api2=provider2. Where there can be multiple providers per API.",
)
self.parser.add_argument(
"--export-dir",
type=Path,
default=None,
help="Export the build artifacts to a specified directory instead of building the container. This will create a directory containing the Dockerfile and all necessary files to build the container.",
)
def _run_stack_build_command(self, args: argparse.Namespace) -> None:
# always keep implementation completely silo-ed away from CLI so CLI
# can be fast to load and reduces dependencies

@@ -93,6 +93,7 @@ def build_image(
image_name: str,
template_or_config: str,
run_config: str | None = None,
export_dir: str | None = None,
):
container_base = build_config.distribution_spec.container_image or "python:3.10-slim"
@@ -108,11 +109,18 @@
container_base,
" ".join(normal_deps),
]
if export_dir is not None:
    args.extend(["--export-dir", str(export_dir)])
# When building from a config file (not a template), include the run config path in the
# build arguments
if run_config is not None:
args.append(run_config)
if special_deps:
args.append("--special-pip-deps")
# The content is added after all the image_type conditions
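# Illustrative argv tail for a config build with --export-dir (values are
# hypothetical): [..., container_base, "fastapi uvicorn",
#   "--export-dir", "./my-build", "/path/to/run.yaml", "--special-pip-deps"]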
elif build_config.image_type == LlamaStackImageType.CONDA.value:
script = str(importlib.resources.files("llama_stack") / "distribution/build_conda_env.sh")
args = [

@@ -26,7 +26,7 @@ BUILD_CONTEXT_DIR=$(pwd)
if [ "$#" -lt 4 ]; then
# This only works for templates
echo "Usage: $0 <template_or_config> <image_name> <container_base> <pip_dependencies> [<run_config>] [<special_pip_deps>]" >&2
echo "Usage: $0 <template_or_config> <image_name> <container_base> <pip_dependencies> [<run_config>] --special-pip-deps <special_pip_deps> --export-dir <export_dir>" >&2
exit 1
fi
set -euo pipefail
@@ -43,23 +43,37 @@ shift
# Handle optional arguments
run_config=""
special_pip_deps=""
export_dir=""
# Check if there are more arguments
# The logic is becoming cumbersome; we should refactor it if we can do better
if [ $# -gt 0 ]; then
# Check if the argument ends with .yaml
if [[ "$1" == *.yaml ]]; then
run_config="$1"
shift
# If there's another argument after .yaml, it must be special_pip_deps
if [ $# -gt 0 ]; then
special_pip_deps="$1"
fi
else
# If it's not .yaml, it must be special_pip_deps
special_pip_deps="$1"
fi
fi
# Process remaining arguments
while [[ $# -gt 0 ]]; do
case "$1" in
*.yaml)
run_config="$1"
shift
;;
--export-dir)
if [ -z "${2:-}" ]; then
echo "Error: --export-dir requires a value" >&2
exit 1
fi
export_dir="$2"
shift 2
;;
--special-pip-deps)
if [ -z "${2:-}" ]; then
echo "Error: --special-pip-deps requires a value" >&2
exit 1
fi
special_pip_deps="$2"
shift 2
;;
*)
echo "Unknown argument: $1" >&2
exit 1
;;
esac
done
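# Illustrative invocation (values hypothetical): the four positional arguments
# come first, then the optional run config and flags in any order, e.g.
#   build_container.sh my-config.yaml mydistro python:3.10-slim "fastapi uvicorn" \
#     run.yaml --export-dir ./export --special-pip-deps "torch"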
# Define color codes
RED='\033[0;31m'
@@ -83,8 +97,8 @@ add_to_container() {
fi
}
# Check if container command is available
if ! is_command_available $CONTAINER_BINARY; then
# Check if container command is available only if not running in export mode
if ! is_command_available $CONTAINER_BINARY && [ -z "$export_dir" ]; then
printf "${RED}Error: ${CONTAINER_BINARY} command not found. Is ${CONTAINER_BINARY} installed and in your PATH?${NC}" >&2
exit 1
fi
@@ -96,7 +110,7 @@ FROM $container_base
WORKDIR /app
# We install the Python 3.11 dev headers and build tools so that any
# Cextension wheels (e.g. polyleven, faisscpu) can compile successfully.
# C-extension wheels (e.g. polyleven, faiss-cpu) can compile successfully.
RUN dnf -y update && dnf install -y iputils git net-tools wget \
vim-minimal python3.11 python3.11-pip python3.11-wheel \
@@ -270,6 +284,64 @@ printf "Containerfile created successfully in %s/Containerfile\n\n" "$TEMP_DIR"
cat "$TEMP_DIR"/Containerfile
printf "\n"
create_export_tarball() {
local export_dir="$1"
local image_name="$2"
local run_config="$3"
local external_providers_dir="$4"
local TEMP_DIR="$5"
local BUILD_CONTEXT_DIR="$6"
mkdir -p "$export_dir"
local timestamp=$(date '+%Y-%m-%d_%H-%M-%S')
local tar_name="${image_name//[^a-zA-Z0-9]/_}_${timestamp}.tar.gz"
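# e.g. an image_name of "ubi9-test" yields ubi9_test_2025-05-16_11-37-56.tar.gz (timestamp illustrative)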
# If a run config is provided, move it into the temp dir as run.yaml so it can
# be included in the tarball; otherwise it's a template build and there is
# nothing to copy
if [ -n "$run_config" ]; then
mv "$run_config" "$TEMP_DIR"/run.yaml
fi
# Create the archive with all files
echo "Creating tarball with the following files:"
echo "- Containerfile"
# Build the tar argument list as an array to avoid eval/quoting pitfalls
local tar_args=(-czf "$export_dir/$tar_name" -C "$TEMP_DIR" Containerfile)
if [ -n "$run_config" ]; then
  echo "- run.yaml"
  # run.yaml was moved into $TEMP_DIR above, so archive it from there
  tar_args+=(-C "$TEMP_DIR" run.yaml)
fi
if [ -n "$external_providers_dir" ] && [ -d "$external_providers_dir" ]; then
  echo "- providers.d directory"
  tar_args+=(-C "$BUILD_CONTEXT_DIR" providers.d)
fi
# Capture both stdout and stderr from the tar command. Assign outside of
# `local` so that $? reflects tar's exit status rather than local's
local tar_output
tar_output=$(tar "${tar_args[@]}" 2>&1)
local tar_status=$?
if [ $tar_status -ne 0 ]; then
  echo "ERROR: Failed to create tarball" >&2
  echo "Tar command output:" >&2
  echo "$tar_output" >&2
  return 1
fi
rm -rf providers.d run.yaml
echo "Build artifacts tarball created: $export_dir/$tar_name"
return 0
}
# If export_dir is specified, copy all necessary files and exit
if [ -n "$export_dir" ]; then
if ! create_export_tarball "$export_dir" "$image_name" "$run_config" "$external_providers_dir" "$TEMP_DIR" "$BUILD_CONTEXT_DIR"; then
exit 1
fi
exit 0
fi
# Start building the CLI arguments
CLI_ARGS=()

@@ -16,9 +16,10 @@ from llama_stack.distribution.utils.image_types import LlamaStackImageType
def test_container_build_passes_path(monkeypatch, tmp_path):
called_with = {}
def spy_build_image(cfg, build_file_path, image_name, template_or_config, run_config=None):
def spy_build_image(cfg, build_file_path, image_name, template_or_config, run_config=None, export_dir=None):
called_with["path"] = template_or_config
called_with["run_config"] = run_config
called_with["export_dir"] = export_dir
return 0
monkeypatch.setattr(