Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-07-21 03:59:42 +00:00
Instead of downloading the models on each run, we now have a single Ollama container that is baked with the models already pulled and ready to use. This removes the CI flakiness caused by model pulling. Signed-off-by: Sébastien Han <seb@redhat.com>
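The pre-baked container described above could be built with a Dockerfile along these lines. This is a hypothetical sketch, not the repository's actual CI configuration: the base image tag, model name, and startup timing are all assumptions.

```dockerfile
# Hypothetical sketch: bake an Ollama model into the image at build time
# so CI jobs never pull it over the network at runtime.
FROM ollama/ollama:latest

# Start the Ollama server in the background, wait for it to come up,
# pull the model into the image layer, then stop the server.
# The model name below is an illustrative assumption.
RUN ollama serve & \
    sleep 5 && \
    ollama pull llama3.2:1b && \
    pkill ollama
```

CI would then run this image directly (e.g. `docker run -d -p 11434:11434 <image>`), with the model weights already present in the image's filesystem, so a registry outage or slow download can no longer fail the job.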
Repository contents:

    actions/
    ISSUE_TEMPLATE/
    workflows/
    CODEOWNERS
    dependabot.yml
    PULL_REQUEST_TEMPLATE.md
    TRIAGERS.md