Commit graph

44 commits

Author SHA1 Message Date
Ashwin Bharambe
e8e8fe7c93 fix: add LLAMA_STACK_CLIENT_DIR mount when installing in docker from source 2025-02-24 10:00:57 -08:00
Ashwin Bharambe
641549c631 Add llama stack client overrides also; necessary for correct docker building 2025-02-24 07:51:11 -08:00
Ashwin Bharambe
0973d386e6 fix: update build_container.sh to ensure llama-models is installed first 2025-02-23 21:47:26 -08:00
Jamie Land
840fae2259
fix: Updating images so that they are able to run without root access (#1208)
# What does this PR do?
Addresses issues where the container is unable to run as a non-root user.
Gives write access to the required folders.

Closes #1207

## Test Plan
I built locally and ran `llama stack build --template remote-vllm
--image-type container` and validated I could see my changes in the
output:

```
#11 1.186 Installed 11 packages in 61ms
#11 1.186  + llama-models==0.1.3
#11 1.186  + llama-stack==0.1.3
#11 1.186  + llama-stack-client==0.1.3
#11 1.186  + markdown-it-py==3.0.0
#11 1.186  + mdurl==0.1.2
#11 1.186  + prompt-toolkit==3.0.50
#11 1.186  + pyaml==25.1.0
#11 1.186  + pygments==2.19.1
#11 1.186  + rich==13.9.4
#11 1.186  + tiktoken==0.9.0
#11 1.186  + wcwidth==0.2.13
#11 DONE 1.6s

#12 [ 9/10] RUN mkdir -p /.llama /.cache
#12 DONE 0.3s

#13 [10/10] RUN chmod -R g+rw /app /.llama /.cache
#13 DONE 0.3s

#14 exporting to image
#14 exporting layers
#14 exporting layers 3.7s done
#14 writing image sha256:11cc8bd954db6d036037bcaf471b173ddd5261ac4b1e72074cccf85d18aefb96 done
#14 naming to docker.io/library/distribution-remote-vllm:0.1.3 done
#14 DONE 3.7s
+ set +x
Success!
```
This is what the resulting image looks like:


![image](https://github.com/user-attachments/assets/070b9c05-b40f-4e7e-aa24-fef260c395e3)

Also tagged the image as `0.1.3-test` and [pushed to
quay](https://quay.io/repository/jland/distribution-remote-vllm?tab=tags)
(note there are a bunch of critical vulnerabilities we may want to look
into)

And for good measure I deployed the resulting image on my OpenShift
environment using the default Security Context and validated that there
were no issues with it coming up.
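A rough local approximation of that check (image tag from the build above;
the UID is arbitrary) is to run the image as a non-root user in the root
group, the way OpenShift's default Security Context does:

```bash
# Run as an arbitrary non-root UID in the root group (GID 0); the server
# should come up without permission errors on /app, /.llama, or /.cache.
docker run --rm --user 1000:0 distribution-remote-vllm:0.1.3
```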

My validation was all done with the `vllm-remote` distribution, but if I
understand everything correctly, the other distributions are just
different run.yaml configs.


Please let me know if there is anything else I need to do.

Co-authored-by: Jamie Land <hokie10@gmail.com>
2025-02-21 11:32:56 -05:00
Yuan Tang
7558678b8c
Fix uv pip install timeout issue for PyTorch (#929)
This fixes the following timeout issue when installing PyTorch via uv.
See also: https://github.com/astral-sh/uv/pull/1694 and
https://github.com/astral-sh/uv/issues/1549

```
Installing pip dependencies
Using Python 3.10.16 environment at: /home/yutang/.conda/envs/distribution-myenv
  × Failed to download and build `antlr4-python3-runtime==4.9.3`
  ├─▶ Failed to extract archive
  ├─▶ failed to unpack
  │   `/home/yutang/.cache/uv/sdists-v7/.tmpDWX4iK/antlr4-python3-runtime-4.9.3/src/antlr4/ListTokenSource.py`
  ├─▶ failed to unpack
  │   `antlr4-python3-runtime-4.9.3/src/antlr4/ListTokenSource.py` into
  │   `/home/yutang/.cache/uv/sdists-v7/.tmpDWX4iK/antlr4-python3-runtime-4.9.3/src/antlr4/ListTokenSource.py`
  ├─▶ error decoding response body
  ├─▶ request or response body error
  ╰─▶ operation timed out
  help: `antlr4-python3-runtime` (v4.9.3) was included because `torchtune`
        (v0.5.0) depends on `omegaconf` (v2.3.0) which depends on
        `antlr4-python3-runtime>=4.9.dev0, <4.10.dev0`
Failed to build target distribution-myenv with return code 1
```
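The workaround referenced above boils down to raising uv's HTTP timeout
before the install; a minimal sketch (the timeout value is arbitrary):

```bash
# uv reads UV_HTTP_TIMEOUT (in seconds); raise it so large downloads such
# as PyTorch and its dependencies don't hit the default timeout.
export UV_HTTP_TIMEOUT=500
uv pip install torch
```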

---------

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-02-03 06:39:35 -08:00
Yuan Tang
4773092dd1
Fix UBI9 image build when installing Python packages via uv (#926)
This was missed in https://github.com/meta-llama/llama-stack/pull/921. 

cc @ashwinb @hardikjshah

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-02-01 19:14:29 -08:00
Ashwin Bharambe
b03e093e80 Add a COPY option for copying source files into docker 2025-02-01 15:35:38 -08:00
Ashwin Bharambe
942e8b96ac Fix uv pip uninstall 2025-02-01 11:42:28 -08:00
Ashwin Bharambe
5b1e69e58e
Use uv pip install instead of pip install (#921)
## What does this PR do? 

See issue: #747 -- `uv` is just plain better. This PR does the bare
minimum of replacing `pip install` with `uv pip install` and ensuring `uv`
exists in the environment.

## Test Plan 

First: create new conda, `uv pip install -e .` on `llama-stack` -- all
is good.
Next: run `llama stack build --template together` followed by `llama
stack run together` -- all good
Next: run `llama stack build --template together --image-name yoyo`
followed by `llama stack run together --image-name yoyo` -- all good
Next: fresh conda and `uv pip install -e .` and `llama stack build
--template together --image-type venv` -- all good.

Docker: `llama stack build --template together --image-type container`
works!
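For reference, a minimal sketch of the substitution this PR makes (the
package list is illustrative):

```bash
# Ensure uv exists in the environment, then use it as a drop-in
# replacement for plain pip.
pip install uv
uv pip install fastapi uvicorn   # previously: pip install fastapi uvicorn
```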
2025-01-31 22:29:41 -08:00
Ashwin Bharambe
891bf704eb
Ensure llama stack build --config <> --image-type <> works (#879)
Fix the issues brought up in
https://github.com/meta-llama/llama-stack/issues/870

Tested all combinations of (conda, container) and (template, config).
2025-01-25 11:13:36 -08:00
Ashwin Bharambe
a8345f5f76 Fix llama stack build docker creation to have correct entrypoint 2025-01-22 16:53:54 -08:00
Xi Yan
74f6af8bbe
[CICD] add simple test step for docker build workflow, fix prefix bug (#821)
# What does this PR do?

**Main Thing**
- Add a simple test step before publishing docker image in workflow

**Side Fix**
- The Docker push action has recently been failing due to an extra prefix
that was introduced. E.g. see:
https://github.com/meta-llama/llama-stack/pull/802#issuecomment-2599507062

cc @terrytangyuan 

## Test Plan

1. Release a TestPyPi version on this code: 0.0.63.dev51206766


3581203331

```
# 1. build docker image
TEST_PYPI_VERSION=0.0.63.dev51206766 llama stack build --template fireworks

# 2. test the docker image
cd distributions/fireworks && docker compose up
```

2. Test the full build + test docker flow using TestPyPi from (1):
1284218494

<img width="1049" alt="image"
src="https://github.com/user-attachments/assets/c025893d-5ce2-48ff-aa90-de00e105ee09"
/>


2025-01-18 15:16:05 -08:00
Yuan Tang
6da3053c0e
More generic image type for OCI-compliant container technologies (#802)
It's a more generic term, applicable to alternatives to Docker such as
Podman and other OCI-compliant technologies.
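Assuming the `DOCKER_BINARY` override the build scripts honor (added in an
earlier commit, further down this history), swapping in Podman might look
like:

```bash
# Sketch: point the build scripts at an OCI-compliant alternative.
DOCKER_BINARY=podman llama stack build --template together --image-type container
```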

---------

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-01-17 16:37:42 -08:00
Xi Yan
32d3abe964
[CICD] Github workflow for publishing Docker images (#764)
# What does this PR do?

- Add Github workflow for publishing docker images. 
- Manual inputs: the workflow can build from (1) a TestPyPi version or (2)
a released PyPi version

**Notes**
- Keep this workflow manually triggered as we don't want to publish
nightly docker images

**Additional Changes**
- Resolve issue with running `llama stack build` on a non-terminal device
```
  File "/home/runner/.local/lib/python3.12/site-packages/llama_stack/distribution/utils/exec.py", line 25, in run_with_pty
    old_settings = termios.tcgetattr(sys.stdin)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
termios.error: (25, 'Inappropriate ioctl for device')
```
- Modified build_container.sh to work in a non-terminal environment
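A hypothetical sketch of the kind of guard this implies in
build_container.sh -- only request an interactive TTY when stdin actually
is one (variable names assumed):

```bash
# Only pass -it to the container runtime when a real TTY is attached,
# so CI (non-terminal) runs don't fail with termios/ioctl errors.
if [ -t 0 ]; then
  TTY_OPTS="-it"
else
  TTY_OPTS=""
fi
$DOCKER_BINARY run $TTY_OPTS "$IMAGE_NAME"
```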


## Test Plan

- Triggered workflow:
3562217878
<img width="1076" alt="image"
src="https://github.com/user-attachments/assets/f1b5cef6-05ab-49c7-b405-53abc9264734"
/>


- Tested published docker image
<img width="702" alt="image"
src="https://github.com/user-attachments/assets/e7135189-65c8-45d8-86f9-9f3be70e380b"
/>


- /tools API endpoints are served, confirming that the docker image is
correctly using the TestPyPi package
<img width="296" alt="image"
src="https://github.com/user-attachments/assets/bbcaa7fe-c0a4-4d22-b600-90e3c254bbfd"
/>

- Published tagged images:
https://hub.docker.com/repositories/llamastack
<img width="947" alt="image"
src="https://github.com/user-attachments/assets/2a0a0494-4d45-4643-bc29-72154ecc54a5"
/>


2025-01-15 09:01:33 -08:00
Jeff Tang
91907b714e
added support of PYPI_VERSION in stack build (#762)
# What does this PR do?

To build a conda env for a specific Llama Stack version, e.g.

`PYPI_VERSION=0.0.58 llama stack build --template together --image-type
conda`
will install these in the llamastack-together env:
```
llama_models                             0.0.58
llama_stack                              0.0.58
llama_stack_client                       0.0.58
```
Without `PYPI_VERSION=`, `llama stack build --template together
--image-type conda` installs the latest versions of all three.
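A sketch of what the pinning presumably looks like inside the build script
(variable handling assumed):

```bash
# Pin all three packages to the requested version when PYPI_VERSION is
# set; otherwise fall back to the latest releases.
if [ -n "$PYPI_VERSION" ]; then
  pip install "llama-models==$PYPI_VERSION" \
              "llama-stack==$PYPI_VERSION" \
              "llama-stack-client==$PYPI_VERSION"
else
  pip install llama-models llama-stack llama-stack-client
fi
```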


2025-01-14 13:45:42 -08:00
Yuan Tang
9173e35bd5
Fix incorrect Python binary path for UBI9 image (#757)
This was missed during a rebase in
https://github.com/meta-llama/llama-stack/pull/676.

Fixed the following error:
```
Error: crun: executable file `python` not found in $PATH: No such file or directory: OCI runtime attempted to invoke a command that was not found
++ error_handler 88
++ echo 'Error occurred in script at line: 88'
Error occurred in script at line: 88
```
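The shape of the fix, as I understand it: UBI9 ships `python3` with no bare
`python` on `$PATH`, so the generated entrypoint must reference the binary
that actually exists. A sketch (module path shown for illustration only):

```bash
# Append the corrected entrypoint to the generated Dockerfile.
cat >>"$TEMP_DIR/Dockerfile" <<'EOF'
ENTRYPOINT ["python3", "-m", "llama_stack.distribution.server.server"]
EOF
```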

cc @hardikjshah

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-01-13 20:17:21 -08:00
Yuan Tang
e45592e229
Support building UBI9 base container image (#676)
This adds support for [UBI9 (Red Hat Universal Base Image
9)](615bcf606f).
Tested `registry.access.redhat.com/ubi9/ubi-minimal:9.5`.
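A sketch of what the UBI9 branch of the Dockerfile generation might emit
(package names assumed); note that `ubi-minimal` uses `microdnf` rather
than `apt-get`:

```bash
cat >"$TEMP_DIR/Dockerfile" <<'EOF'
FROM registry.access.redhat.com/ubi9/ubi-minimal:9.5
WORKDIR /app
RUN microdnf install -y python3 python3-pip && microdnf clean all
EOF
```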

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-01-13 13:41:56 -08:00
Yuan Tang
8146dce11e
Add missing newlines before printing the Dockerfile content (#700)
Before:
```
Dockerfile created successfully in /tmp/tmp.qyMdb0vI8X/DockerfileFROM python:3.10-slim
WORKDIR /app

RUN apt-get update && apt-get install -y        iputils-ping net-tools iproute2 dnsutils telnet        curl wget telnet        procps psmisc lsof        traceroute        bubblewrap        && rm -rf /var/lib/apt/lists/*
```

After:
```
Dockerfile created successfully in /tmp/tmp.qyMdb0vI8X/Dockerfile

FROM python:3.10-slim
WORKDIR /app

RUN apt-get update && apt-get install -y        iputils-ping net-tools iproute2 dnsutils telnet        curl wget telnet        procps psmisc lsof        traceroute        bubblewrap        && rm -rf /var/lib/apt/lists/*

```
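The fix itself is presumably just a blank line in the status message; a
sketch (variable names assumed):

```bash
# Separate the status message from the file dump with blank lines.
printf "Dockerfile created successfully in %s\n\n" "$TEMP_DIR/Dockerfile"
cat "$TEMP_DIR/Dockerfile"
printf "\n"
```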

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-01-02 09:04:29 -08:00
Ashwin Bharambe
681322731b
Make run yaml optional so dockers can start with just --env (#492)
When running with dockers, the idea is that users should be able to work
purely with the `llama stack` CLI. They should not need to know about the
existence of any YAMLs unless they need to. This PR enables that.

The docker command now doesn't need to volume mount a yaml and can
simply be:
```bash
docker run -v ~/.llama/:/root/.llama \
  --env A=a --env B=b
```

## Test Plan

Check with conda first (no regressions):
```bash
LLAMA_STACK_DIR=. llama stack build --template ollama
llama stack run ollama --port 5001

# server starts up correctly
```

Check with docker
```bash
# build the docker
LLAMA_STACK_DIR=. llama stack build --template ollama --image-type docker

export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"

docker run -it  -p 5001:5001 \
  -v ~/.llama:/root/.llama \
  -v $PWD:/app/llama-stack-source \
  localhost/distribution-ollama:dev \
  --port 5001 \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env OLLAMA_URL=http://host.docker.internal:11434
```

Note that volume mounting to `/app/llama-stack-source` is only needed
because we built the docker with uncommitted source code.
2024-11-20 13:11:40 -08:00
Ashwin Bharambe
887ccc2143 Ensure llama-stack-client is installed in the container with TEST_PYPI 2024-11-19 15:21:10 -08:00
Ashwin Bharambe
394519d68a Add llama-stack-client as a legitimate dependency for llama-stack 2024-11-19 11:44:35 -08:00
Ashwin Bharambe
c46b462c22 Updates to docker build script 2024-11-19 11:36:53 -08:00
Ashwin Bharambe
2a31163178
Auto-generate distro yamls + docs (#468)
# What does this PR do?

Automatically generates
- build.yaml
- run.yaml
- run-with-safety.yaml
- parts of markdown docs

for the distributions.

## Test Plan

At this point, this only updates the YAMLs and the docs. Some testing
(especially with ollama and vllm) has been performed, but much more is
needed.
2024-11-18 14:57:06 -08:00
Ashwin Bharambe
1aeac7b9f7 Change order of building the Docker 2024-11-12 13:09:04 -08:00
Ashwin Bharambe
998419ffb2 use image tag actually! 2024-11-12 12:57:08 -08:00
Ashwin Bharambe
896b304e62 Use tags for docker images instead of changing image name 2024-11-12 12:42:30 -08:00
Ashwin Bharambe
f4426f6a43 Fix bug in llama stack build; SERVER_DEPENDENCIES were dropped 2024-11-11 20:12:13 -08:00
Ashwin Bharambe
36da9a600e add explicit platform 2024-11-11 19:30:15 -08:00
Ashwin Bharambe
218803b7c8 add pypi version to docker tag 2024-11-11 19:20:31 -08:00
Ashwin Bharambe
343458479d Make sure TEST_PYPI_VERSION is used in docker builds 2024-11-11 18:40:13 -08:00
Xi Yan
748606195b
Kill llama stack configure (#371)
* remove configure

* build msg

* wip

* build->run

* delete prints

* docs

* fix docs, kill configure

* precommit

* update fireworks build

* docs

* clean up build

* comments

* fix

* test

* remove baking build.yaml into docker

* fix msg, urls

* configure msg
2024-11-06 13:32:10 -08:00
Steve Grubb
f04b566c5c
Do not cache pip (#349)
Pip has a 3.3GB cache of torch and friends. Do not keep this in the image.
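A sketch of the change, assuming the install lines in the generated
Dockerfile (package list illustrative):

```bash
# --no-cache-dir keeps pip's multi-gigabyte wheel cache (torch and
# friends) out of the image layers.
pip install --no-cache-dir torch llama-stack
```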
2024-10-31 09:52:40 -07:00
Ashwin Bharambe
afae4e3d8e Update docker build flow a little 2024-10-25 10:06:21 -07:00
Ashwin Bharambe
05a8d47b98 Add a meta-reference-quantized-gpu distribution 2024-10-23 21:45:50 -07:00
Xi Yan
4d2bd2d39e
add more distro templates (#279)
* verify dockers

* together distro verified

* readme

* fireworks distro

* fireworks compose up

* fireworks verified
2024-10-21 18:15:08 -07:00
Xi Yan
62d266f018
[CLI] avoid configure twice (#171)
* avoid configure twice

* cleanup tmp config

* update output msg

* address comment

* update msg

* script update
2024-10-03 11:20:54 -07:00
Russell Bryant
204eb6d810
docker: Check for selinux before using --security-opt (#167)
Before using `--security-opt label=disable`, check that SELinux is
enabled. Otherwise, the option is not relevant.
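A hypothetical version of that check (variable name assumed):

```bash
# Add the SELinux-specific option only when SELinux is actually active;
# on systems without it (e.g. macOS), getenforce doesn't exist at all.
if command -v getenforce >/dev/null 2>&1 && [ "$(getenforce)" != "Disabled" ]; then
  DOCKER_OPTS="$DOCKER_OPTS --security-opt label=disable"
fi
```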

This fixes errors on Mac.

Closes #166

Signed-off-by: Russell Bryant <rbryant@redhat.com>
2024-10-02 10:37:41 -07:00
Xi Yan
d28c3dfe0f
[CLI] simplify docker run (#159)
* bake run.yaml inside docker, simplify run

* add docker template examples

* delete generated Dockerfile

* unique deps

* clean up debug

* default entrypoint

* address comments, update output msg

* update msg

* build output msg

* configure msg

* unique special_deps

* remove quotes in configure
2024-09-30 15:04:04 -07:00
Russell Bryant
8db49de961
docker: Install in editable mode for dev purposes (#160)
While rebuilding a stack using the `docker` image type and having
`LLAMA_STACK_DIR` set so it installs `llama_stack` from my local
source, I noticed that once built, it just used the image build cache
and didn't pull in changes to my source.

1. Install in editable mode (`pip install -e`) for dev purposes.

2. Mount the source into the container for `configure` and `run` so
   that the editable install works (see the sketch below).
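A minimal sketch of those two steps (mount point and variable names
assumed):

```bash
# 1) During the image build, install the source in editable mode:
pip install -e /app/llama-stack-source

# 2) At `configure`/`run` time, mount the live checkout over that path so
#    local edits take effect without rebuilding:
docker run -v "$LLAMA_STACK_DIR:/app/llama-stack-source" "$IMAGE_NAME"
```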

Signed-off-by: Russell Bryant <rbryant@redhat.com>
2024-09-30 11:56:31 -07:00
Russell Bryant
cb36be320f
Fix podman+selinux compatibility (#132)
When I ran `llama stack configure` for my `docker` based stack on my
system using podman + SELinux (CentOS Stream 9), the `podman run`
command failed due to SELinux blocking access to the volume mount.

As a simple fix, disable SELinux label checking.
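A sketch of the resulting invocation (image name and volume path are
illustrative):

```bash
# Disable SELinux label checking for the volume mount so podman on an
# SELinux-enforcing host (e.g. CentOS Stream 9) can read the mounted config.
podman run --security-opt label=disable -v ~/.llama:/root/.llama "$IMAGE_NAME"
```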

Signed-off-by: Russell Bryant <rbryant@redhat.com>
2024-09-29 20:19:44 -07:00
Xi Yan
ca7602a642 fix #100 2024-09-25 15:11:56 -07:00
Ashwin Bharambe
bda974e660 Make the "all-remote" distribution lightweight in dependencies and size 2024-09-24 14:18:57 -07:00
Ashwin Bharambe
9eb01dd664 Add DOCKER_BINARY / DOCKER_OPTS to all scripts 2024-09-19 10:26:41 -07:00
Ashwin Bharambe
9487ad8294
API Updates (#73)
* API Keys passed from Client instead of distro configuration

* delete distribution registry

* Rename the "package" word away

* Introduce a "Router" layer for providers

Some providers need to be factorized and considered as thin routing
layers on top of other providers. Consider two examples:

- The inference API should be a routing layer over inference providers,
  routed using the "model" key
- The memory banks API is another instance where various memory bank
  types will be provided by independent providers (e.g., a vector store
  is served by Chroma while a keyvalue memory can be served by Redis or
  PGVector)

This commit introduces a generalized routing layer for this purpose.

* update `apis_to_serve`

* llama_toolchain -> llama_stack

* Codemod from llama_toolchain -> llama_stack

- added providers/registry
- cleaned up api/ subdirectories and moved impls away
- restructured api/api.py
- from llama_stack.apis.<api> import foo should work now
- update imports to do llama_stack.apis.<api>
- update many other imports
- added __init__, fixed some registry imports
- updated registry imports
- create_agentic_system -> create_agent
- AgenticSystem -> Agent

* Moved some stuff out of common/; re-generated OpenAPI spec

* llama-toolchain -> llama-stack (hyphens)

* add control plane API

* add redis adapter + sqlite provider

* move core -> distribution

* Some more toolchain -> stack changes

* small naming shenanigans

* Removing custom tool and agent utilities and moving them client side

* Move control plane to distribution server for now

* Remove control plane from API list

* no codeshield dependency randomly plzzzzz

* Add "fire" as a dependency

* add back event loggers

* stack configure fixes

* use brave instead of bing in the example client

* add init file so it gets packaged

* add init files so it gets packaged

* Update MANIFEST

* bug fix

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Xi Yan <xiyan@meta.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
2024-09-17 19:51:35 -07:00
Renamed from llama_toolchain/core/build_container.sh