llama-stack-mirror/.github/actions
Sébastien Han c8b5774ff3
ci: use ollama container image with loaded models
Instead of downloading the models on each run, CI now uses a single Ollama
container image that is pre-baked with the models pulled and ready to use.

This removes the CI flakiness caused by pulling models at test time.
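The approach described above — baking model weights into the image at build time instead of pulling them at runtime — can be sketched roughly as the Dockerfile below. This is a minimal sketch under stated assumptions, not the actual image definition from the commit: the base image tag, the model name `llama3.2:1b`, and the `/models` path are all illustrative.

```dockerfile
# Sketch: an Ollama image with a model pre-pulled (illustrative, not the
# actual CI image). Model name and paths below are assumptions.
FROM ollama/ollama:latest

# Store models outside the base image's default volume path so the pulled
# weights persist into the image layer.
ENV OLLAMA_MODELS=/models

# Start the server in the background just long enough to pull the model;
# the downloaded weights are then baked into this layer.
RUN ollama serve & \
    sleep 5 && \
    ollama pull llama3.2:1b
```

CI jobs can then start this image directly (e.g. `docker run -d -p 11434:11434 <image>`) and skip the pull step entirely, which is what eliminates the flaky network-dependent downloads.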

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-06-06 11:54:22 +02:00
setup-ollama ci: use ollama container image with loaded models 2025-06-06 11:54:22 +02:00
setup-runner ci: run integration test on more python version (#2400) 2025-06-05 20:40:21 +02:00