feat: allow using llama-stack-library-client from verifications (#2238)

Having to run (and re-run) a server while running verifications can be
annoying when you are iterating on code. This change lets you use the
library client instead -- and because it is OpenAI client compatible, it
all just works.
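
For illustration, here is a minimal sketch of how a test fixture might dispatch on a `stack:` provider prefix to build the library client in-process; the import path, class name, and helper function are assumptions for the sketch, not code taken from this diff:

```python
# Hypothetical sketch -- not the actual conftest code from this PR.
# Assumes the library client is importable from this path.
from llama_stack.distribution.library_client import LlamaStackAsLibraryClient


def make_client(provider: str):
    """Return a client for the given --provider value (illustrative helper)."""
    if provider.startswith("stack:"):
        # "stack:together" -> run the "together" distribution in-process,
        # so no separate server has to be started for the tests.
        distro = provider.removeprefix("stack:")
        client = LlamaStackAsLibraryClient(distro)
        client.initialize()
        return client
    # Any other value would be handled by the usual OpenAI-compatible
    # HTTP client path, elided here.
    raise NotImplementedError("HTTP client path elided in this sketch")
```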

## Test Plan

```
pytest -s -v tests/verifications/openai_api/test_responses.py \
   --provider=stack:together \
   --model meta-llama/Llama-4-Scout-17B-16E-Instruct
```

The diff hunk below adds the new `--model` option alongside the existing
`--provider` option in the verifications pytest config:

```diff
@@ -25,6 +25,11 @@ def pytest_addoption(parser):
         action="store",
         help="Provider to use for testing",
     )
+    parser.addoption(
+        "--model",
+        action="store",
+        help="Model to use for testing",
+    )
 
 
 pytest_plugins = [
```
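
For context, tests can consume the new option through pytest's standard `getoption` API; the fixture below is illustrative and not part of this diff:

```python
import pytest


@pytest.fixture
def model(request):
    # Returns the value passed as --model on the pytest command line.
    return request.config.getoption("--model")
```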