This PR makes setting up Ollama optional for CI. By default, we use `replay` mode for inference requests and serve the stored results from the `tests/integration/recordings/` directory. Every so often, users will update tests that require re-recording. To handle this, we check for the `re-record-tests` label on the PR. If it is present (see the workflow sketch below):

- Ollama is spun up
- inference mode is set to `record`
- after the tests finish, any new recordings are committed and pushed back to the PR

## Test Plan

This is GitHub CI. Gotta test it live.
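As a rough illustration, here is a minimal GitHub Actions sketch of the label-driven flow described above. The job layout, the `LLAMA_STACK_TEST_INFERENCE_MODE` environment variable, and the exact pytest invocation are assumptions made for this sketch; only the `re-record-tests` label, the `replay`/`record` modes, and the `tests/integration/recordings/` directory come from the description above.

```yaml
name: Integration Tests (sketch)

on:
  pull_request:

jobs:
  integration:
    runs-on: ubuntu-latest
    permissions:
      contents: write  # needed to push recordings back to the PR branch
    env:
      # Re-record only when the PR carries the `re-record-tests` label;
      # otherwise replay from the checked-in recordings.
      INFERENCE_MODE: ${{ contains(github.event.pull_request.labels.*.name, 're-record-tests') && 'record' || 'replay' }}
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.head_ref }}

      - name: Start Ollama (only when re-recording)
        if: env.INFERENCE_MODE == 'record'
        run: |
          # Illustrative only: install and launch Ollama so live inference calls can be recorded.
          curl -fsSL https://ollama.com/install.sh | sh
          ollama serve &

      - name: Run integration tests
        run: |
          # Hypothetical switch name; the real test suite may use a different mechanism.
          LLAMA_STACK_TEST_INFERENCE_MODE="$INFERENCE_MODE" pytest tests/integration

      - name: Push updated recordings back to the PR
        if: env.INFERENCE_MODE == 'record'
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add tests/integration/recordings/
          # Commit and push only if recordings actually changed.
          git diff --cached --quiet || (git commit -m "Update test recordings" && git push)
```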
# Llama Stack Tests
Llama Stack has multiple layers of testing to ensure continuous functionality and prevent regressions in the codebase.
| Testing Type | Details |
|---|---|
| Unit | [unit/README.md](unit/README.md) |
| Integration | [integration/README.md](integration/README.md) |
| Verification | [verifications/README.md](verifications/README.md) |