llama-stack-mirror/tests/Containerfile
Sébastien Han c8b5774ff3
ci: use ollama container image with loaded models
Instead of downloading the models on every run, we now build a single Ollama
container image with the models already pulled and ready to use.

This removes the CI flakiness caused by model pulling.

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-06-06 11:54:22 +02:00
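
As a rough usage sketch (not part of this commit): once the image is built with
the podman command documented in the Containerfile below, CI could start it and
point the tests at the already-loaded server. The container name and readiness
check here are assumptions for illustration; port 11434 is Ollama's default and
/api/tags is its standard model-listing endpoint.

    # Build the baked image and start it before the test run (illustrative)
    podman build --platform linux/amd64 -f Containerfile -t ollama-with-models .
    podman run -d --name ollama -p 11434:11434 ollama-with-models
    # Wait until the server responds; the models are already in the image, so no pulls happen
    until curl -sf http://localhost:11434/api/tags > /dev/null; do sleep 1; done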

# Containerfile used to build our all-in-one Ollama image to run tests in CI
# podman build --platform linux/amd64 -f Containerfile -t ollama-with-models .
#
FROM --platform=linux/amd64 ollama/ollama:latest
# Start the ollama server in the background, give it a few seconds to come up,
# then pull the models so they are baked into a single image layer
RUN ollama serve & \
    sleep 5 && \
    ollama pull llama3.2:3b-instruct-fp16 && \
    ollama pull all-minilm:latest
# Set the entrypoint to start ollama serve
ENTRYPOINT ["ollama", "serve"]