Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-03 09:53:45 +00:00)
# Llama Stack Tests
Llama Stack has multiple layers of testing to ensure continuous functionality and prevent regressions in the codebase.
| Testing Type | Details |
|---|---|
| Unit | [unit/README.md](unit/README.md) |
| Integration | [integration/README.md](integration/README.md) |
| Verification | [verifications/README.md](verifications/README.md) |