[CICD] Github workflow for publishing Docker images (#764)

# What does this PR do?

- Add a GitHub workflow for publishing Docker images.
- Manual inputs: the workflow can build from (1) a TestPyPI version or (2) a released PyPI version.

**Notes**
- Keep this workflow manually triggered, since we don't want to publish nightly Docker images (see the trigger sketch below).
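
For context, a manually triggered (`workflow_dispatch`) workflow is invoked by hand. A minimal sketch with the GitHub CLI; the workflow file name and input names below are assumptions for illustration, not taken from this PR:

```bash
# Manually trigger the publish workflow with the GitHub CLI.
# "publish-docker.yml", "pypi_source", and "version" are hypothetical
# names; the actual workflow defines its own file name and inputs.
gh workflow run publish-docker.yml \
  -f pypi_source=testpypi \
  -f version=0.0.63rc1
```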

**Additional Changes**
- Resolve an issue with running `llama stack build` on a non-terminal device:
```
  File "/home/runner/.local/lib/python3.12/site-packages/llama_stack/distribution/utils/exec.py", line 25, in run_with_pty
    old_settings = termios.tcgetattr(sys.stdin)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
termios.error: (25, 'Inappropriate ioctl for device')
```
- Modified `build_container.sh` to work in a non-terminal environment (a minimal sketch of the idea follows).
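
The underlying idea of the non-terminal fix can be sketched as a TTY check; this is a minimal illustration, not the exact change in this PR:

```bash
# Minimal sketch: branch on whether stdin is a terminal, so that
# TTY-only code paths (like termios calls) are skipped on CI runners.
if [ -t 0 ]; then
  echo "stdin is a TTY: PTY-based execution is safe"
else
  echo "stdin is not a TTY (e.g. GitHub Actions): fall back to plain execution"
fi
```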


## Test Plan

- Triggered workflow run: 3562217878
<img width="1076" alt="image"
src="https://github.com/user-attachments/assets/f1b5cef6-05ab-49c7-b405-53abc9264734"
/>


- Tested the published Docker image (a hedged pull-and-run example follows the screenshot)
<img width="702" alt="image"
src="https://github.com/user-attachments/assets/e7135189-65c8-45d8-86f9-9f3be70e380b"
/>
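
For reference, pulling and running a published image would look roughly like this; the distribution name, tag, and port mapping are assumptions for illustration:

```bash
# Pull and run an image from the llamastack Docker Hub organization.
# The distribution name, tag, and port mapping are hypothetical examples.
docker pull llamastack/distribution-ollama:test-0.0.63rc1
docker run -p 5000:5000 llamastack/distribution-ollama:test-0.0.63rc1
```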


- `/tools` API endpoints are served, which confirms the Docker image is correctly using the TestPyPI package (see the sample check below)
<img width="296" alt="image"
src="https://github.com/user-attachments/assets/bbcaa7fe-c0a4-4d22-b600-90e3c254bbfd"
/>
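
A quick way to verify this from the command line; the host, port, and exact route are assumptions for illustration:

```bash
# Assumed host/port mapping; checks that the new /tools routes respond.
curl -s http://localhost:5000/tools
```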

- Published tagged images:
https://hub.docker.com/repositories/llamastack
<img width="947" alt="image"
src="https://github.com/user-attachments/assets/2a0a0494-4d45-4643-bc29-72154ecc54a5"
/>


## Sources

Please link relevant resources if necessary.


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
Commit 32d3abe964 (parent b78e6675ea) by Xi Yan, 2025-01-15 09:01:33 -08:00, committed by GitHub.
5 changed files with 145 additions and 21 deletions


Diff of `build_container.sh`:

```diff
@@ -54,7 +54,7 @@ add_to_docker() {
 # Update and install UBI9 components if UBI9 base image is used
 if [[ $docker_base == *"registry.access.redhat.com/ubi9"* ]]; then
-  add_to_docker <<EOF
+  add_to_docker << EOF
 FROM $docker_base
 WORKDIR /app
@@ -64,7 +64,7 @@ RUN microdnf -y update && microdnf install -y iputils net-tools wget \
 EOF
 else
-  add_to_docker <<EOF
+  add_to_docker << EOF
 FROM $docker_base
 WORKDIR /app
@@ -82,13 +82,17 @@ fi
 # Add pip dependencies first since llama-stack is what will change most often
 # so we can reuse layers.
 if [ -n "$pip_dependencies" ]; then
-  add_to_docker "RUN pip install --no-cache $pip_dependencies"
+  add_to_docker << EOF
+RUN pip install --no-cache $pip_dependencies
+EOF
 fi
 if [ -n "$special_pip_deps" ]; then
   IFS='#' read -ra parts <<<"$special_pip_deps"
   for part in "${parts[@]}"; do
-    add_to_docker "RUN pip install --no-cache $part"
+    add_to_docker <<EOF
+RUN pip install --no-cache $part
+EOF
   done
 fi
@@ -104,14 +108,19 @@ if [ -n "$LLAMA_STACK_DIR" ]; then
   # Install in editable format. We will mount the source code into the container
   # so that changes will be reflected in the container without having to do a
   # rebuild. This is just for development convenience.
-  add_to_docker "RUN pip install --no-cache -e $stack_mount"
+  add_to_docker << EOF
+RUN pip install --no-cache -e $stack_mount
+EOF
 else
   if [ -n "$TEST_PYPI_VERSION" ]; then
     # these packages are damaged in test-pypi, so install them first
-    add_to_docker "RUN pip install fastapi libcst"
-    add_to_docker <<EOF
+    add_to_docker << EOF
+RUN pip install fastapi libcst
+EOF
+    add_to_docker << EOF
 RUN pip install --no-cache --extra-index-url https://test.pypi.org/simple/ \
   llama-models==$TEST_PYPI_VERSION llama-stack-client==$TEST_PYPI_VERSION llama-stack==$TEST_PYPI_VERSION
 EOF
   else
     if [ -n "$PYPI_VERSION" ]; then
@@ -119,7 +128,9 @@ EOF
     else
       SPEC_VERSION="llama-stack"
     fi
-    add_to_docker "RUN pip install --no-cache $SPEC_VERSION"
+    add_to_docker << EOF
+RUN pip install --no-cache $SPEC_VERSION
+EOF
   fi
 fi
@@ -129,14 +140,14 @@ if [ -n "$LLAMA_MODELS_DIR" ]; then
     exit 1
   fi
-  add_to_docker <<EOF
+  add_to_docker << EOF
 RUN pip uninstall -y llama-models
 RUN pip install --no-cache $models_mount
 EOF
 fi
-add_to_docker <<EOF
+add_to_docker << EOF
 # This would be good in production but for debugging flexibility lets not add it right now
 # We need a more solid production ready entrypoint.sh anyway
```
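
The recurring pattern in this diff: quoted-string arguments to `add_to_docker` are replaced with heredocs, which keep multi-line Dockerfile fragments readable while still expanding shell variables. A minimal sketch, with `cat` standing in for `add_to_docker` (a hypothetical stand-in):

```bash
# cat stands in for add_to_docker; $pip_dependencies expands inside the heredoc.
pip_dependencies="requests numpy"
cat << EOF
RUN pip install --no-cache $pip_dependencies
EOF
```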