Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-10-03 19:57:35 +00:00)
This PR refactors the integration test system to use global "setups", which provides better separation of concerns: **suites = what to test, setups = how to configure.**

NOTE: if you have naming suggestions, please provide feedback.

Changes:
- New `tests/integration/setups.py` with global, reusable configurations (ollama, vllm, gpt, claude)
- Modified `scripts/integration-tests.sh` options to match the underlying pytest options
- Updated documentation to reflect the new global setup system

The main benefit is that setups can be reused across multiple suites (e.g., use "gpt" with any suite), even though some are specifically tailored to a suite (vision <> ollama-vision). It is now easier to add new configurations without modifying existing suites.

Usage examples:
- `pytest tests/integration --suite=responses --setup=gpt`
- `pytest tests/integration --suite=vision` # auto-selects the "ollama-vision" setup
- `pytest tests/integration --suite=base --setup=vllm`
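To make the shape of a setup concrete, here is a minimal sketch of how a global setup registry could be declared in `tests/integration/setups.py`. The `Setup` dataclass, its fields, the example values, and the suite-to-setup mapping are illustrative assumptions, not the actual implementation in this PR.

```python
# Hypothetical sketch of a global "setup" registry; names and fields are assumed.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Setup:
    """How to configure a test run, independent of which suite is selected."""
    name: str
    description: str
    env: dict[str, str] = field(default_factory=dict)  # provider endpoints, API keys, etc.


# Global, reusable setups: any suite can be combined with any of these.
SETUPS: dict[str, Setup] = {
    "ollama": Setup(
        name="ollama",
        description="Local Ollama server",
        env={"OLLAMA_URL": "http://localhost:11434"},
    ),
    "gpt": Setup(
        name="gpt",
        description="OpenAI-hosted models",
        env={"OPENAI_API_KEY": ""},  # placeholder; the real value comes from the environment
    ),
}

# Some suites still prefer a tailored setup when none is given explicitly,
# e.g. the vision suite auto-selecting "ollama-vision" (registered elsewhere).
SUITE_DEFAULT_SETUP: dict[str, str] = {
    "vision": "ollama-vision",
}
```

Keeping the registry global rather than per-suite is what allows, say, the "gpt" setup to pair with any suite.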
The Setup Ollama composite action below starts an Ollama container in CI and selects the image based on the test suite:
name: Setup Ollama
description: Start Ollama
inputs:
  suite:
    description: 'Test suite to use: base, responses, vision, etc.'
    required: false
    default: ''
runs:
  using: "composite"
  steps:
    - name: Start Ollama
      shell: bash
      run: |
        if [ "${{ inputs.suite }}" == "vision" ]; then
          image="ollama-with-vision-model"
        else
          image="ollama-with-models"
        fi

        echo "Starting Ollama with image: $image"
        docker run -d --name ollama -p 11434:11434 docker.io/llamastack/$image
        echo "Verifying Ollama status..."
        timeout 30 bash -c 'while ! curl -s -L http://127.0.0.1:11434; do sleep 1 && echo "."; done'
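For context, a caller workflow would consume this composite action roughly as sketched below; the action's path under `.github/actions/` and the checkout step are assumptions, since only the action file itself is shown here.

```yaml
# Hypothetical workflow job using the composite action above.
jobs:
  integration-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Ollama
        uses: ./.github/actions/setup-ollama   # assumed location of the composite action
        with:
          suite: vision   # selects ollama-with-vision-model instead of ollama-with-models
```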