Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-06 18:40:57 +00:00)
# What does this PR do?

This PR introduces a reusable GitHub Actions workflow for pulling and running an Ollama model, with caching to avoid repeated downloads.

Closes: #1949

## Test Plan

1. Trigger a workflow that uses the Ollama setup and confirm that:
   - The model is pulled successfully.
   - It is placed in the correct directory (currently the official default location, not `~ollama/.ollama/models` as mentioned in the review comment; this still needs confirmation).
2. Re-run the same workflow and validate that:
   - The model is restored from the cache.
   - Execution succeeds with the cached model.
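A minimal sketch of what such a reusable workflow could look like, assuming `actions/cache` keyed on the model tag and Ollama's default Linux model directory; the input name, default model tag, cache path, and step layout below are illustrative assumptions, not the exact contents of this PR.

```yaml
# Hedged sketch of a reusable Ollama setup workflow (illustrative, not the file added by this PR).
name: Setup Ollama (reusable)

on:
  workflow_call:
    inputs:
      model:
        description: "Ollama model to pull (example default; adjust as needed)"
        required: false
        type: string
        default: "llama3.2:3b"

jobs:
  setup-ollama:
    runs-on: ubuntu-latest
    steps:
      - name: Restore cached Ollama models
        id: ollama-cache
        uses: actions/cache@v4
        with:
          # Assumed default model store on Linux runners; the PR notes the
          # exact directory still needs confirmation, so treat this as a guess.
          path: ~/.ollama/models
          key: ollama-${{ inputs.model }}

      - name: Install Ollama
        run: curl -fsSL https://ollama.com/install.sh | sh

      - name: Start Ollama server
        run: |
          ollama serve &
          sleep 5  # give the server a moment to come up

      - name: Pull model (skipped when restored from cache)
        if: steps.ollama-cache.outputs.cache-hit != 'true'
        run: ollama pull "${{ inputs.model }}"
```

Keying the cache on the model tag means a cache hit skips the multi-gigabyte download, which is the main cost this PR avoids; a calling workflow would reference the file via `uses:` with a `model` input, though the exact filename and input names depend on the actual workflow introduced here.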
| Workflow file |
|---|
| changelog.yml |
| gha_workflow_llama_stack_tests.yml |
| install-script-ci.yml |
| integration-auth-tests.yml |
| integration-tests.yml |
| pre-commit.yml |
| providers-build.yml |
| semantic-pr.yml |
| stale_bot.yml |
| test-external-providers.yml |
| tests.yml |
| unit-tests.yml |
| update-readthedocs.yml |