llama-stack-mirror/.github/actions/setup-vision-ollama/action.yml

name: Setup Ollama for Vision Tests
description: Start Ollama with a vision model
runs:
using: "composite"
steps:
- name: Start Ollama
shell: bash
run: |
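        # Launch the prebuilt image that bundles Ollama with the vision model (detached, default port 11434)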
        docker run -d --name ollama -p 11434:11434 docker.io/llamastack/ollama-with-vision-model
        echo "Verifying Ollama status..."
        timeout 30 bash -c 'while ! curl -s -L http://127.0.0.1:11434; do sleep 1 && echo "."; done'
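
# Example usage from a workflow in this repository (a minimal sketch; the job and
# step names below are illustrative assumptions, not taken from an existing workflow):
#
#   jobs:
#     vision-tests:
#       runs-on: ubuntu-latest
#       steps:
#         - name: Checkout repository
#           uses: actions/checkout@v4
#         - name: Setup Ollama for Vision Tests
#           uses: ./.github/actions/setup-vision-ollama
#         # Later steps can reach the Ollama API at http://127.0.0.1:11434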