llama-stack-mirror/.github/workflows
Latest commit c8b5774ff3 by Sébastien Han
ci: use ollama container image with loaded models
Instead of downloading the models on every run, CI now uses a single Ollama container image that is pre-baked with the models already pulled and ready to use.

This removes the CI flakiness caused by model pulling.

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-06-06 11:54:22 +02:00
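
For illustration, a workflow job could consume such a pre-baked image as a GitHub Actions service container, so no pull step is needed at test time. The sketch below is only an assumption-based example: the image reference ghcr.io/example/ollama-with-models:latest is a placeholder and does not name the image actually used by integration-tests.yml:

# Minimal sketch of a job using a pre-baked Ollama service container.
# NOTE: the image reference below is a placeholder, not the real image used by this repo.
jobs:
  integration-tests:
    runs-on: ubuntu-latest
    services:
      ollama:
        image: ghcr.io/example/ollama-with-models:latest  # hypothetical image with models already pulled
        ports:
          - 11434:11434  # Ollama's default API port
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Wait for Ollama and verify models are present
        run: |
          # No "ollama pull" step is needed; the models ship inside the image.
          timeout 60 bash -c 'until curl -sf http://localhost:11434/api/tags; do sleep 2; done'
          curl -s http://localhost:11434/api/tags
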
File | Last commit | Date
changelog.yml | ci: pin github actions to hashes (#1776) | 2025-04-01 17:09:39 +02:00
gha_workflow_llama_stack_tests.yml | chore: fix hash for thollander/actions-comment-pull-request (#1900) | 2025-04-09 10:10:07 +02:00
install-script-ci.yml | feat: Llama Stack Meta Reference installation script (#1383) | 2025-04-28 11:25:59 +02:00
integration-auth-tests.yml | fix: use proper service account for kube auth (#2227) | 2025-05-21 15:28:21 -07:00
integration-tests.yml | ci: use ollama container image with loaded models | 2025-06-06 11:54:22 +02:00
pre-commit.yml | ci: enable ruff output format for github (#2214) | 2025-05-20 09:04:03 -07:00
providers-build.yml | fix: vllm starter name (#2392) | 2025-06-04 16:21:36 +02:00
semantic-pr.yml | ci: pin github actions to hashes (#1776) | 2025-04-01 17:09:39 +02:00
stale_bot.yml | ci: pin github actions to hashes (#1776) | 2025-04-01 17:09:39 +02:00
test-external-providers.yml | chore: refactor workflow writting (#2225) | 2025-05-21 17:31:14 +02:00
tests.yml | ci: pin github actions to hashes (#1776) | 2025-04-01 17:09:39 +02:00
unit-tests.yml | chore: refactor workflow writting (#2225) | 2025-05-21 17:31:14 +02:00
update-readthedocs.yml | chore: refactor workflow writting (#2225) | 2025-05-21 17:31:14 +02:00