Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-07-22 12:37:53 +00:00)
Instead of downloading the models on every run, CI now uses a single Ollama container image that is baked with the models already pulled and ready to use. This removes the CI flakiness caused by pulling models at runtime. Signed-off-by: Sébastien Han <seb@redhat.com>
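As a rough illustration of the approach described above, a model can be baked into an Ollama image along these lines. This is only a sketch: the base tag, model name, and pull mechanics are assumptions for the example, not the Containerfile actually used by this repository's CI.

```dockerfile
# Sketch: bake a model into an Ollama image so CI jobs never pull at runtime.
# The base tag and model name (llama3.2:1b) are placeholders, not the repo's choices.
FROM ollama/ollama:latest

# Start the Ollama server in the background, wait briefly for it to come up,
# then pull the model so its blobs are committed into this image layer.
RUN ollama serve & \
    sleep 5 && \
    ollama pull llama3.2:1b

# The base image's default entrypoint still serves the Ollama API; the pulled
# model is available immediately when a container starts from this image.
```

A workflow such as integration-tests.yml could then run this prebuilt image (for example as a service container) instead of pulling models in every job, which is what eliminates the flakiness.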
Workflow files:

- changelog.yml
- gha_workflow_llama_stack_tests.yml
- install-script-ci.yml
- integration-auth-tests.yml
- integration-tests.yml
- pre-commit.yml
- providers-build.yml
- semantic-pr.yml
- stale_bot.yml
- test-external-providers.yml
- tests.yml
- unit-tests.yml
- update-readthedocs.yml