Some leftovers

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
Yuan Tang 2025-01-17 12:16:04 -05:00
parent 31501c0c7e
commit adfa2c3413
8 changed files with 46 additions and 46 deletions


@@ -17,13 +17,13 @@ pip install -e .
 llama stack build -h
 ```
-We will start build our distribution (in the form of a Conda environment, or Docker image). In this step, we will specify:
+We will start build our distribution (in the form of a Conda environment, or Container image). In this step, we will specify:
 - `name`: the name for our distribution (e.g. `my-stack`)
 - `image_type`: our build image type (`conda | container`)
 - `distribution_spec`: our distribution specs for specifying API providers
 - `description`: a short description of the configurations for the distribution
 - `providers`: specifies the underlying implementation for serving each API endpoint
-- `image_type`: `conda` | `container` to specify whether to build the distribution in the form of Docker image or Conda environment.
+- `image_type`: `conda` | `container` to specify whether to build the distribution in the form of Container image or Conda environment.
 After this step is complete, a file named `<name>-build.yaml` and template file `<name>-run.yaml` will be generated and saved at the output file path specified at the end of the command.
@@ -35,7 +35,7 @@ After this step is complete, a file named `<name>-build.yaml` and template file
 llama stack build
 > Enter a name for your Llama Stack (e.g. my-local-stack): my-stack
-> Enter the image type you want your Llama Stack to be built as (docker or conda): conda
+> Enter the image type you want your Llama Stack to be built as (container or conda): conda
 Llama Stack is composed of several APIs working together. Let's select
 the provider types (implementations) you want to use for these APIs.
@@ -348,26 +348,26 @@ llama stack build --config llama_stack/templates/ollama/build.yaml
 ```
 :::
-:::{tab-item} Building Docker
+:::{tab-item} Building Container
 > [!TIP]
-> Podman is supported as an alternative to Docker. Set `DOCKER_BINARY` to `podman` in your environment to use Podman.
+> Podman is supported as an alternative to Docker. Set `CONTAINER_BINARY` to `podman` in your environment to use Podman.
-To build a docker image, you may start off from a template and use the `--image-type docker` flag to specify `docker` as the build image type.
+To build a container image, you may start off from a template and use the `--image-type container` flag to specify `container` as the build image type.
 ```
-llama stack build --template ollama --image-type docker
+llama stack build --template ollama --image-type container
 ```
 ```
-$ llama stack build --template ollama --image-type docker
+$ llama stack build --template ollama --image-type container
 ...
-Dockerfile created successfully in /tmp/tmp.viA3a3Rdsg/Dockerfile
+Containerfile created successfully in /tmp/tmp.viA3a3Rdsg/Containerfile
 FROM python:3.10-slim
 ...
 You can now edit ~/meta-llama/llama-stack/tmp/configs/ollama-run.yaml and run `llama stack run ~/meta-llama/llama-stack/tmp/configs/ollama-run.yaml`
 ```
-After this step is successful, you should be able to find the built docker image and test it with `llama stack run <path/to/run.yaml>`.
+After this step is successful, you should be able to find the built container image and test it with `llama stack run <path/to/run.yaml>`.
 :::
 ::::
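The tip above relies on `CONTAINER_BINARY` defaulting to `docker` unless the caller exports something else. A minimal sketch of that parameter-expansion pattern (the `resolve_container_binary` function is hypothetical, for illustration only):

```shell
#!/usr/bin/env bash
# Sketch of the default-with-override pattern behind CONTAINER_BINARY:
# ${VAR:-default} resolves to docker unless the caller exports another binary.

resolve_container_binary() {
  printf '%s\n' "${CONTAINER_BINARY:-docker}"
}

unset CONTAINER_BINARY
resolve_container_binary                          # prints "docker"
CONTAINER_BINARY=podman resolve_container_binary  # prints "podman"
```

The per-invocation `CONTAINER_BINARY=podman` prefix sets the variable only for that call, which is why the scripts can keep `docker` as the global default.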


@@ -27,7 +27,7 @@ class StackConfigure(Subcommand):
 self.parser.add_argument(
 "config",
 type=str,
-help="Path to the build config file (e.g. ~/.llama/builds/<image_type>/<name>-build.yaml). For docker, this could also be the name of the docker image. ",
+help="Path to the build config file (e.g. ~/.llama/builds/<image_type>/<name>-build.yaml). For container, this could also be the name of the container image. ",
 )
 self.parser.add_argument(


@@ -92,7 +92,7 @@ class StackRun(Subcommand):
 )
 if not config_file.exists() and not has_yaml_suffix:
-# check if it's a build config saved to docker dir
+# check if it's a build config saved to container dir
 config_file = Path(
 BUILDS_BASE_DIR / ImageType.container.value / f"{args.config}-run.yaml"
 )


@@ -78,7 +78,7 @@ def get_provider_dependencies(
 provider_spec = providers_for_api[provider_type]
 deps.extend(provider_spec.pip_packages)
 if provider_spec.container_image:
-raise ValueError("A stack's dependencies cannot have a docker image")
+raise ValueError("A stack's dependencies cannot have a container image")
 normal_deps = []
 special_deps = []


@@ -13,7 +13,7 @@ PYPI_VERSION=${PYPI_VERSION:-}
 BUILD_PLATFORM=${BUILD_PLATFORM:-}
 if [ "$#" -lt 4 ]; then
-echo "Usage: $0 <build_name> <docker_base> <pip_dependencies> [<special_pip_deps>]" >&2
+echo "Usage: $0 <build_name> <container_base> <pip_dependencies> [<special_pip_deps>]" >&2
 echo "Example: $0 my-fastapi-app python:3.9-slim 'fastapi uvicorn' " >&2
 exit 1
 fi
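The usage message renamed above sits behind a simple positional-argument guard. A stripped-down sketch of that guard (the `check_usage` wrapper is hypothetical, not from the script):

```shell
#!/usr/bin/env bash
# Sketch of the argument-count guard from the build script's usage check:
# fail early with a usage message when too few positional args are given.
check_usage() {
  if [ "$#" -lt 4 ]; then
    echo "Usage: <build_name> <container_base> <pip_dependencies> [<special_pip_deps>]" >&2
    return 1
  fi
}

# Four positional arguments satisfy the guard.
check_usage my-stack python:3.9-slim build.yaml /tmp/builds
```

Wrapping the check in a function keeps the sketch testable; the real script performs the check inline at top level and calls `exit 1`.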
@@ -24,7 +24,7 @@ set -euo pipefail
 build_name="$1"
 image_name="distribution-$build_name"
-docker_base=$2
+container_base=$2
 build_file_path=$3
 host_build_dir=$4
 pip_dependencies=$5
@@ -36,14 +36,14 @@ NC='\033[0m' # No Color
 SCRIPT_DIR=$(dirname "$(readlink -f "$0")")
 REPO_DIR=$(dirname $(dirname "$SCRIPT_DIR"))
-DOCKER_BINARY=${DOCKER_BINARY:-docker}
-DOCKER_OPTS=${DOCKER_OPTS:-}
+CONTAINER_BINARY=${CONTAINER_BINARY:-docker}
+CONTAINER_OPTS=${CONTAINER_OPTS:-}
 TEMP_DIR=$(mktemp -d)
-add_to_docker() {
+add_to_container() {
 local input
-output_file="$TEMP_DIR/Dockerfile"
+output_file="$TEMP_DIR/Containerfile"
 if [ -t 0 ]; then
 printf '%s\n' "$1" >>"$output_file"
 else
@@ -53,9 +53,9 @@ add_to_docker() {
 }
 # Update and install UBI9 components if UBI9 base image is used
-if [[ $docker_base == *"registry.access.redhat.com/ubi9"* ]]; then
-add_to_docker << EOF
-FROM $docker_base
+if [[ $container_base == *"registry.access.redhat.com/ubi9"* ]]; then
+add_to_container << EOF
+FROM $container_base
 WORKDIR /app
 RUN microdnf -y update && microdnf install -y iputils net-tools wget \
@@ -64,8 +64,8 @@ RUN microdnf -y update && microdnf install -y iputils net-tools wget \
 EOF
 else
-add_to_docker << EOF
-FROM $docker_base
+add_to_container << EOF
+FROM $container_base
 WORKDIR /app
 RUN apt-get update && apt-get install -y \
@@ -82,7 +82,7 @@ fi
 # Add pip dependencies first since llama-stack is what will change most often
 # so we can reuse layers.
 if [ -n "$pip_dependencies" ]; then
-add_to_docker << EOF
+add_to_container << EOF
 RUN pip install --no-cache $pip_dependencies
 EOF
 fi
@@ -90,7 +90,7 @@ fi
 if [ -n "$special_pip_deps" ]; then
 IFS='#' read -ra parts <<<"$special_pip_deps"
 for part in "${parts[@]}"; do
-add_to_docker <<EOF
+add_to_container <<EOF
 RUN pip install --no-cache $part
 EOF
 done
@@ -108,16 +108,16 @@ if [ -n "$LLAMA_STACK_DIR" ]; then
 # Install in editable format. We will mount the source code into the container
 # so that changes will be reflected in the container without having to do a
 # rebuild. This is just for development convenience.
-add_to_docker << EOF
+add_to_container << EOF
 RUN pip install --no-cache -e $stack_mount
 EOF
 else
 if [ -n "$TEST_PYPI_VERSION" ]; then
 # these packages are damaged in test-pypi, so install them first
-add_to_docker << EOF
+add_to_container << EOF
 RUN pip install fastapi libcst
 EOF
-add_to_docker << EOF
+add_to_container << EOF
 RUN pip install --no-cache --extra-index-url https://test.pypi.org/simple/ \
 llama-models==$TEST_PYPI_VERSION llama-stack-client==$TEST_PYPI_VERSION llama-stack==$TEST_PYPI_VERSION
@@ -128,7 +128,7 @@ EOF
 else
 SPEC_VERSION="llama-stack"
 fi
-add_to_docker << EOF
+add_to_container << EOF
 RUN pip install --no-cache $SPEC_VERSION
 EOF
 fi
@@ -140,14 +140,14 @@ if [ -n "$LLAMA_MODELS_DIR" ]; then
 exit 1
 fi
-add_to_docker << EOF
+add_to_container << EOF
 RUN pip uninstall -y llama-models
 RUN pip install --no-cache $models_mount
 EOF
 fi
-add_to_docker << EOF
+add_to_container << EOF
 # This would be good in production but for debugging flexibility lets not add it right now
 # We need a more solid production ready entrypoint.sh anyway
@@ -156,8 +156,8 @@ ENTRYPOINT ["python", "-m", "llama_stack.distribution.server.server", "--templat
 EOF
-printf "Dockerfile created successfully in $TEMP_DIR/Dockerfile\n\n"
-cat $TEMP_DIR/Dockerfile
+printf "Containerfile created successfully in $TEMP_DIR/Containerfile\n\n"
+cat $TEMP_DIR/Containerfile
 printf "\n"
 mounts=""
@@ -170,7 +170,7 @@ fi
 if command -v selinuxenabled &>/dev/null && selinuxenabled; then
 # Disable SELinux labels -- we don't want to relabel the llama-stack source dir
-DOCKER_OPTS="$DOCKER_OPTS --security-opt label=disable"
+CONTAINER_OPTS="$CONTAINER_OPTS --security-opt label=disable"
 fi
 # Set version tag based on PyPI version
@@ -200,7 +200,7 @@ else
 fi
 set -x
-$DOCKER_BINARY build $DOCKER_OPTS $PLATFORM -t $image_tag -f "$TEMP_DIR/Dockerfile" "$REPO_DIR" $mounts
+$CONTAINER_BINARY build $CONTAINER_OPTS $PLATFORM -t $image_tag -f "$TEMP_DIR/Containerfile" "$REPO_DIR" $mounts
 # clean up tmp/configs
 set +x
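The `add_to_docker` → `add_to_container` rename above touches every call site of a single append-to-file heredoc helper. A self-contained sketch of that pattern (simplified: the base image value is assumed for the demo, and output goes to a fresh temp dir):

```shell
#!/usr/bin/env bash
set -euo pipefail

TEMP_DIR=$(mktemp -d)

# Append either a literal argument (tty stdin) or heredoc stdin to the
# Containerfile, mirroring the renamed add_to_container helper.
add_to_container() {
  local output_file="$TEMP_DIR/Containerfile"
  if [ -t 0 ]; then
    printf '%s\n' "$1" >>"$output_file"
  else
    cat >>"$output_file"
  fi
}

container_base="python:3.10-slim"  # assumed base image for the demo
add_to_container << EOF
FROM $container_base
WORKDIR /app
EOF

cat "$TEMP_DIR/Containerfile"
```

Because each call appends, the script can build the Containerfile incrementally, emitting layers in dependency order so image layers cache well.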


@@ -6,8 +6,8 @@
 # This source code is licensed under the terms described in the LICENSE file in
 # the root directory of this source tree.
-DOCKER_BINARY=${DOCKER_BINARY:-docker}
-DOCKER_OPTS=${DOCKER_OPTS:-}
+CONTAINER_BINARY=${CONTAINER_BINARY:-docker}
+CONTAINER_OPTS=${CONTAINER_OPTS:-}
 LLAMA_STACK_DIR=${LLAMA_STACK_DIR:-}
 set -euo pipefail
@@ -30,7 +30,7 @@ container_build_dir="/app/builds"
 if command -v selinuxenabled &> /dev/null && selinuxenabled; then
 # Disable SELinux labels
-DOCKER_OPTS="$DOCKER_OPTS --security-opt label=disable"
+CONTAINER_OPTS="$CONTAINER_OPTS --security-opt label=disable"
 fi
 mounts=""
@@ -39,7 +39,7 @@ if [ -n "$LLAMA_STACK_DIR" ]; then
 fi
 set -x
-$DOCKER_BINARY run $DOCKER_OPTS -it \
+$CONTAINER_BINARY run $CONTAINER_OPTS -it \
 --entrypoint "/usr/local/bin/llama" \
 -v $host_build_dir:$container_build_dir \
 $mounts \
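The run scripts grow `CONTAINER_OPTS` and `mounts` conditionally before the final `run` command. A minimal sketch of that accumulation pattern (the `LLAMA_STACK_DIR` value is a hypothetical path for illustration):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch of how the run scripts accumulate flags: CONTAINER_OPTS and mounts
# start empty (or inherited) and grow only when the relevant condition holds.
CONTAINER_OPTS=${CONTAINER_OPTS:-}
mounts=""
LLAMA_STACK_DIR="/tmp/llama-stack-src"  # hypothetical source checkout

if [ -n "$LLAMA_STACK_DIR" ]; then
  mounts="$mounts -v $LLAMA_STACK_DIR:/app/llama-stack-source"
fi

if command -v selinuxenabled &>/dev/null && selinuxenabled; then
  # Disable SELinux labels, as the scripts above do on SELinux hosts
  CONTAINER_OPTS="$CONTAINER_OPTS --security-opt label=disable"
fi

echo "mounts:$mounts"
```

Keeping the options in plain strings means the final command line stays a single unquoted expansion, which is why the scripts append with a leading space each time.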


@@ -6,8 +6,8 @@
 # This source code is licensed under the terms described in the LICENSE file in
 # the root directory of this source tree.
-DOCKER_BINARY=${DOCKER_BINARY:-docker}
-DOCKER_OPTS=${DOCKER_OPTS:-}
+CONTAINER_BINARY=${CONTAINER_BINARY:-docker}
+CONTAINER_OPTS=${CONTAINER_OPTS:-}
 LLAMA_CHECKPOINT_DIR=${LLAMA_CHECKPOINT_DIR:-}
 LLAMA_STACK_DIR=${LLAMA_STACK_DIR:-}
 TEST_PYPI_VERSION=${TEST_PYPI_VERSION:-}
@@ -64,7 +64,7 @@ set -x
 if command -v selinuxenabled &> /dev/null && selinuxenabled; then
 # Disable SELinux labels
-DOCKER_OPTS="$DOCKER_OPTS --security-opt label=disable"
+CONTAINER_OPTS="$CONTAINER_OPTS --security-opt label=disable"
 fi
 mounts=""
@@ -73,7 +73,7 @@ if [ -n "$LLAMA_STACK_DIR" ]; then
 fi
 if [ -n "$LLAMA_CHECKPOINT_DIR" ]; then
 mounts="$mounts -v $LLAMA_CHECKPOINT_DIR:/root/.llama"
-DOCKER_OPTS="$DOCKER_OPTS --gpus=all"
+CONTAINER_OPTS="$CONTAINER_OPTS --gpus=all"
 fi
 version_tag="latest"
@@ -85,7 +85,7 @@ elif [ -n "$TEST_PYPI_VERSION" ]; then
 version_tag="test-$TEST_PYPI_VERSION"
 fi
-$DOCKER_BINARY run $DOCKER_OPTS -it \
+$CONTAINER_BINARY run $CONTAINER_OPTS -it \
 -p $port:$port \
 $env_vars \
 -v "$yaml_config:/app/config.yaml" \


@@ -150,7 +150,7 @@ class InlineProviderSpec(ProviderSpec):
 container_image: Optional[str] = Field(
 default=None,
 description="""
-The docker image to use for this implementation. If one is provided, pip_packages will be ignored.
+The container image to use for this implementation. If one is provided, pip_packages will be ignored.
 If a provider depends on other providers, the dependencies MUST NOT specify a container image.
 """,
 )