# What does this PR do?
Currently the "default" dir for external providers is
`/etc/llama-stack/providers.d`.
This directory is neither created nor used anywhere.
Switch to the more user-friendly `~/.llama/providers.d/`.
This allows external providers to actually create this directory and/or
populate it upon installation; `pip` cannot create directories under `/etc`.
If a user does not specify a directory, default to this one.
See https://github.com/containers/ramalama-stack/issues/36
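For illustration, a provider package could now create and populate this directory at install time, for example from a post-install step (the subdirectory layout and file name below are assumptions, not part of this change):
```bash
# Hypothetical post-install step for an external provider package;
# the user-writable default can be created without root, unlike /etc
mkdir -p ~/.llama/providers.d/remote/inference
cp my-provider-spec.yaml ~/.llama/providers.d/remote/inference/
```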
Signed-off-by: Charlie Doern <cdoern@redhat.com>
As part of the build process, we now include the generated run.yaml
(based on the provided build configuration file) in the container. We
also updated the entrypoint to use this run configuration.
Given this simple distribution configuration:
```
# build.yaml
version: '2'
distribution_spec:
  description: Use (an external) Ollama server for running LLM inference
  providers:
    inference:
    - remote::ollama
    vector_io:
    - inline::faiss
    safety:
    - inline::llama-guard
    agents:
    - inline::meta-reference
    telemetry:
    - inline::meta-reference
    eval:
    - inline::meta-reference
    datasetio:
    - remote::huggingface
    - inline::localfs
    scoring:
    - inline::basic
    - inline::llm-as-judge
    - inline::braintrust
    tool_runtime:
    - remote::brave-search
    - remote::tavily-search
    - inline::code-interpreter
    - inline::rag-runtime
    - remote::model-context-protocol
    - remote::wolfram-alpha
  container_image: "registry.access.redhat.com/ubi9"
image_type: container
image_name: test
```
Build it:
```
llama stack build --config build.yaml
```
Run it:
```
podman run --rm \
-p 8321:8321 \
-e OLLAMA_URL=http://host.containers.internal:11434 \
--name llama-stack-server \
localhost/leseb-test:0.2.2
```
Signed-off-by: Sébastien Han <seb@redhat.com>
# What does this PR do?
Fixes the UBI 9 container build failure (`error: command 'gcc' failed`
when installing `polyleven`, `faiss`, etc.) by installing the missing
compiler toolchain:
- `python3.11-devel`, `gcc`, and `make` added to the UBI 9 `dnf install` line.
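For reference, the resulting install line looks roughly like the following (the surrounding package list is an assumption; only the added toolchain packages are the point):
```bash
# Hypothetical UBI 9 install line with the added compiler toolchain;
# surrounding packages are illustrative only
dnf install -y python3.11 python3.11-pip python3.11-devel gcc make && \
    dnf clean all
```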
### Closes #1970
## Test Plan
- Build a distro with a UBI image
# What does this PR do?
This should fix https://github.com/meta-llama/llama-stack/issues/1716
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
# What does this PR do?
This PR updates `build_container.sh` to prevent an "unknown flag" error
when using the `BUILD_PLATFORM` environment variable during `llama stack
build`.
Closes #1699
## Test Plan
Running the following command without these changes results in an
"unknown flag" error.
```
CONTAINER_BINARY=podman BUILD_PLATFORM=linux/amd64 llama stack build --template ollama --image-type container
```
With these changes, the same command should build the image correctly.
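A minimal sketch of the kind of guard this implies in `build_container.sh` (variable names and the build invocation are assumptions, not the actual diff):
```bash
# Hypothetical sketch: only pass --platform when BUILD_PLATFORM is set,
# so the container CLI never receives an empty or malformed flag
platform_args=""
if [ -n "$BUILD_PLATFORM" ]; then
  platform_args="--platform $BUILD_PLATFORM"
fi
$CONTAINER_BINARY build $platform_args -t "$image_tag" -f "$containerfile" .
```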
Signed-off-by: Michael Clifford <mcliffor@redhat.com>
# What does this PR do?
Update the container build script so that it is compatible with Podman.
`--progress=plain` is now the default option and can be overridden.
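A rough sketch of what making the progress output overridable could look like (variable names are assumptions):
```bash
# Hypothetical sketch: default to plain progress output but let the
# caller override it through an environment variable
progress="${PROGRESS:-plain}"
$CONTAINER_BINARY build --progress="$progress" -t "$image_tag" .
```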
## Test Plan
N/A
Signed-off-by: Jeff MAURY <jmaury@redhat.com>
# What does this PR do?
Signed-off-by: reidliu <reid201711@gmail.com>
Co-authored-by: reidliu <reid201711@gmail.com>
# What does this PR do?
When building providers in a virtual environment or containers, special
pip dependencies may not always be provided (e.g., for Ollama). The
check should only fail if the required number of arguments is missing.
Currently, two arguments are mandatory:
1. Environment name
2. Pip dependencies
Additionally, return statements were replaced with `sys.exit(1)` in error
conditions to ensure immediate termination on critical failures. Error
handling in the stack build process was also improved to guarantee the
program exits with status 1 when facing configuration issues or build
failures.
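As a rough illustration (script name, variable names, and usage text are assumptions), the argument check amounts to something like:
```bash
# Hypothetical sketch: only the environment name and pip dependencies
# are mandatory; special pip dependencies remain optional
if [ "$#" -lt 2 ]; then
  echo "Usage: $0 <env_name> <pip_dependencies> [<special_pip_deps>]" >&2
  exit 1
fi
env_name="$1"
pip_dependencies="$2"
special_pip_deps="${3:-}"
```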
Signed-off-by: Sébastien Han <seb@redhat.com>
## Test Plan
This command shouldn't fail:
```
llama stack build --template ollama --image-type venv
```
Signed-off-by: Sébastien Han <seb@redhat.com>
# What does this PR do?
Addresses issues where the container is unable to run as a non-root user.
Gives write access to the required folders.
Closes #1207
## Test Plan
I built locally and ran `llama stack build --template remote-vllm
--image-type container` and validated I could see my changes in the
output:
```
#11 1.186 Installed 11 packages in 61ms
#11 1.186 + llama-models==0.1.3
#11 1.186 + llama-stack==0.1.3
#11 1.186 + llama-stack-client==0.1.3
#11 1.186 + markdown-it-py==3.0.0
#11 1.186 + mdurl==0.1.2
#11 1.186 + prompt-toolkit==3.0.50
#11 1.186 + pyaml==25.1.0
#11 1.186 + pygments==2.19.1
#11 1.186 + rich==13.9.4
#11 1.186 + tiktoken==0.9.0
#11 1.186 + wcwidth==0.2.13
#11 DONE 1.6s
#12 [ 9/10] RUN mkdir -p /.llama /.cache
#12 DONE 0.3s
#13 [10/10] RUN chmod -R g+rw /app /.llama /.cache
#13 DONE 0.3s
#14 exporting to image
#14 exporting layers
#14 exporting layers 3.7s done
#14 writing image sha256:11cc8bd954db6d036037bcaf471b173ddd5261ac4b1e72074cccf85d18aefb96 done
#14 naming to docker.io/library/distribution-remote-vllm:0.1.3 done
#14 DONE 3.7s
+ set +x
Success!
```
This is what the resulting image looks like:

Also tagged the image as `0.1.3-test` and [pushed to
quay](https://quay.io/repository/jland/distribution-remote-vllm?tab=tags)
(note there are a bunch of critical vulnerabilities we may want to look
into)
And for good measure I deployed the resulting image on my Openshift
environment using the default Security Context and validated that there
were no issue with it coming up.
My validation was all done with the `remote-vllm` distribution, but if I
am understanding everything correctly the other distributions are just
different run.yaml configs.
Please let me know if there is anything else I need to do.
Co-authored-by: Jamie Land <hokie10@gmail.com>
This fixes the following timeout issue when installing PyTorch via uv.
Also see references: https://github.com/astral-sh/uv/pull/1694,
https://github.com/astral-sh/uv/issues/1549
```
Installing pip dependencies
Using Python 3.10.16 environment at: /home/yutang/.conda/envs/distribution-myenv
× Failed to download and build `antlr4-python3-runtime==4.9.3`
├─▶ Failed to extract archive
├─▶ failed to unpack
│ `/home/yutang/.cache/uv/sdists-v7/.tmpDWX4iK/antlr4-python3-runtime-4.9.3/src/antlr4/ListTokenSource.py`
├─▶ failed to unpack
│ `antlr4-python3-runtime-4.9.3/src/antlr4/ListTokenSource.py` into
│ `/home/yutang/.cache/uv/sdists-v7/.tmpDWX4iK/antlr4-python3-runtime-4.9.3/src/antlr4/ListTokenSource.py`
├─▶ error decoding response body
├─▶ request or response body error
╰─▶ operation timed out
help: `antlr4-python3-runtime` (v4.9.3) was included because `torchtune`
(v0.5.0) depends on `omegaconf` (v2.3.0) which depends on
`antlr4-python3-runtime>=4.9.dev0, <4.10.dev0`
Failed to build target distribution-myenv with return code 1
```
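The workaround boils down to giving `uv` a larger HTTP timeout; a minimal sketch (the exact timeout value is an assumption):
```bash
# Hypothetical sketch: raise uv's HTTP timeout so large downloads such as
# PyTorch wheels do not time out mid-transfer
export UV_HTTP_TIMEOUT=500
uv pip install torch torchtune
```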
---------
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
## What does this PR do?
See issue: #747 -- `uv` is just plain better. This PR does the bare
minimum of replacing `pip install` by `uv pip install` and ensuring `uv`
exists in the environment.
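Conceptually the change amounts to something like the following in the build scripts (a sketch, not the exact diff):
```bash
# Hypothetical sketch: make sure uv is present, then use it for installs
if ! command -v uv >/dev/null 2>&1; then
  pip install uv
fi
uv pip install -e .
```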
## Test Plan
First: create new conda, `uv pip install -e .` on `llama-stack` -- all
is good.
Next: run `llama stack build --template together` followed by `llama
stack run together` -- all good
Next: run `llama stack build --template together --image-name yoyo`
followed by `llama stack run together --image-name yoyo` -- all good
Next: fresh conda and `uv pip install -e .` and `llama stack build
--template together --image-type venv` -- all good.
Docker: `llama stack build --template together --image-type container`
works!
# What does this PR do?
**Main Thing**
- Add a simple test step before publishing docker image in workflow
**Side Fix**
- The Docker push action has been failing recently due to an extra prefix
being introduced. E.g. see:
https://github.com/meta-llama/llama-stack/pull/802#issuecomment-2599507062
cc @terrytangyuan
## Test Plan
1. Release a TestPyPi version on this code: 0.0.63.dev51206766
3581203331
```
# 1. build docker image
TEST_PYPI_VERSION=0.0.63.dev51206766 llama stack build --template fireworks
# 2. test the docker image
cd distributions/fireworks && docker compose up
```
4. Test the full build + test docker flow using TestPyPi from (1):
1284218494
<img width="1049" alt="image"
src="https://github.com/user-attachments/assets/c025893d-5ce2-48ff-aa90-de00e105ee09"
/>
It's a more generic term and applicable to alternatives to Docker, such
as Podman or other OCI-compliant technologies.
---------
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
# What does this PR do?
- Add GitHub workflow for publishing Docker images.
- Manual inputs:
  - We can use (1) a TestPyPI version or (2) build via a released PyPI
    version
**Notes**
- Keep this workflow manually triggered as we don't want to publish
nightly docker images
**Additional Changes**
- Resolve issue with running `llama stack build` on a non-terminal device
```
File "/home/runner/.local/lib/python3.12/site-packages/llama_stack/distribution/utils/exec.py", line 25, in run_with_pty
old_settings = termios.tcgetattr(sys.stdin)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
termios.error: (25, 'Inappropriate ioctl for device')
```
- Modified build_container.sh to work in non-terminal environment
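Conceptually, the non-terminal handling comes down to detecting whether a TTY is attached and falling back to non-interactive behavior; a minimal, purely illustrative sketch (not the actual change):
```bash
# Hypothetical sketch: detect whether stdin is a real terminal and fall
# back to non-interactive behavior otherwise (e.g. in GitHub Actions)
if [ -t 0 ]; then
  echo "stdin is a TTY; interactive features are safe to use"
else
  echo "no TTY attached; running in non-interactive mode"
fi
```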
## Test Plan
- Triggered workflow:
3562217878
<img width="1076" alt="image"
src="https://github.com/user-attachments/assets/f1b5cef6-05ab-49c7-b405-53abc9264734"
/>
- Tested published docker image
<img width="702" alt="image"
src="https://github.com/user-attachments/assets/e7135189-65c8-45d8-86f9-9f3be70e380b"
/>
- `/tools` API endpoints are served, confirming that the Docker image is
correctly using the TestPyPi package
<img width="296" alt="image"
src="https://github.com/user-attachments/assets/bbcaa7fe-c0a4-4d22-b600-90e3c254bbfd"
/>
- Published tagged images:
https://hub.docker.com/repositories/llamastack
<img width="947" alt="image"
src="https://github.com/user-attachments/assets/2a0a0494-4d45-4643-bc29-72154ecc54a5"
/>
# What does this PR do?
To build a conda env for a specific Llama Stack version, e.g.
`PYPI_VERSION=0.0.58 llama stack build --template together --image-type
conda`
will install these in the llamastack-together env:
will install these in the llamastack-together env:
```
llama_models 0.0.58
llama_stack 0.0.58
llama_stack_client 0.0.58
```
Without `PYPI_VERSION=`, `llama stack build --template together
--image-type conda` installs the latest versions of all of them.
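Conceptually, the version pin behaves roughly like this (the package set and installer invocation are assumptions):
```bash
# Hypothetical sketch: pin the Llama Stack packages when PYPI_VERSION is
# set, otherwise install the latest releases
if [ -n "$PYPI_VERSION" ]; then
  pip install "llama-models==$PYPI_VERSION" \
              "llama-stack==$PYPI_VERSION" \
              "llama-stack-client==$PYPI_VERSION"
else
  pip install llama-models llama-stack llama-stack-client
fi
```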
This was missed during a rebase in
https://github.com/meta-llama/llama-stack/pull/676.
Fixed the following error:
```
Error: crun: executable file `python` not found in $PATH: No such file or directory: OCI runtime attempted to invoke a command that was not found
++ error_handler 88
++ echo 'Error occurred in script at line: 88'
Error occurred in script at line: 88
```
cc @hardikjshah
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
This adds support for [UBI9 (Red Hat Universal Base Image
9)](615bcf606f).
Tested with `registry.access.redhat.com/ubi9/ubi-minimal:9.5`.
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
When running with Docker, the idea is that users should be able to work
purely with the `llama stack` CLI. They should not need to know about
the existence of any YAMLs unless they need to. This PR enables that.
The docker command now doesn't need to volume-mount a YAML and can
simply be:
```bash
docker run -v ~/.llama/:/root/.llama \
--env A=a --env B=b
```
## Test Plan
Check with conda first (no regressions):
```bash
LLAMA_STACK_DIR=. llama stack build --template ollama
llama stack run ollama --port 5001
# server starts up correctly
```
Check with docker
```bash
# build the docker
LLAMA_STACK_DIR=. llama stack build --template ollama --image-type docker
export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
docker run -it -p 5001:5001 \
-v ~/.llama:/root/.llama \
-v $PWD:/app/llama-stack-source \
localhost/distribution-ollama:dev \
--port 5001 \
--env INFERENCE_MODEL=$INFERENCE_MODEL \
--env OLLAMA_URL=http://host.docker.internal:11434
```
Note that volume mounting to `/app/llama-stack-source` is only needed
because we built the docker with uncommitted source code.
# What does this PR do?
Automatically generates
- build.yaml
- run.yaml
- run-with-safety.yaml
- parts of markdown docs
for the distributions.
## Test Plan
At this point, this only updates the YAMLs and the docs. Some testing
(especially with ollama and vllm) has been performed, but much more
thorough testing is needed.
Before using `--security-opt label=disable`, check that SELinux is
enabled. Otherwise, the option is not relevant.
This fixes errors on Mac.
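A minimal sketch of such a guard (assuming `getenforce` is the probe; the real script and image name may differ):
```bash
# Hypothetical sketch: only disable SELinux labeling when SELinux is
# actually present and enabled; on macOS the flag is simply skipped
security_opts=""
if command -v getenforce >/dev/null 2>&1 && [ "$(getenforce)" != "Disabled" ]; then
  security_opts="--security-opt label=disable"
fi
podman run $security_opts --rm -v ~/.llama:/root/.llama localhost/distribution-ollama:dev --port 5001
```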
Closes #166
Signed-off-by: Russell Bryant <rbryant@redhat.com>