# Llama Stack Tests
Llama Stack relies on multiple layers of testing to ensure continuous functionality and prevent regressions in the codebase.
| Testing Type | Details |
|---|---|
| Unit | [unit/README.md](unit/README.md) |
| Integration | [integration/README.md](integration/README.md) |
| Verification | [verifications/README.md](verifications/README.md) |
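
Each suite is driven by pytest. As an illustration, the verification suite can be run against a particular provider and model; the provider and model below are example values from one such invocation, so substitute your own (see the per-suite READMEs for the authoritative options):

```bash
# Run the OpenAI-compatible verification tests against an example
# provider (stack:together) and model -- both are illustrative values.
pytest -s -v tests/verifications/openai_api/test_responses.py \
  --provider=stack:together \
  --model meta-llama/Llama-4-Scout-17B-16E-Instruct
```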