feat: allow using llama-stack-library-client from verifications (#2238)
Having to run (and re-run) a server while running verifications is annoying when you are iterating on code. This change lets you use the library client instead -- and because it is OpenAI-client compatible, everything works as before.

## Test Plan

```
pytest -s -v tests/verifications/openai_api/test_responses.py \
  --provider=stack:together \
  --model meta-llama/Llama-4-Scout-17B-16E-Instruct
```
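For context, here is a minimal sketch of how a `stack:`-prefixed `--provider` value can select the in-process library client instead of an HTTP client. The fixture name, the `LlamaStackAsLibraryClient` import path and `initialize()` call, and the base URL are assumptions for illustration, not taken from this commit:

```python
# Hedged sketch: choose between the in-process library client and a
# plain HTTP OpenAI client based on the --provider option.
import pytest


@pytest.fixture(scope="session")
def openai_client(request):
    provider = request.config.getoption("--provider")
    if provider and provider.startswith("stack:"):
        # e.g. "stack:together" -> run the distro in-process, no server needed.
        # Import path and initialize() are assumptions about llama-stack.
        from llama_stack.distribution.library_client import LlamaStackAsLibraryClient

        client = LlamaStackAsLibraryClient(provider.removeprefix("stack:"))
        client.initialize()
        return client  # OpenAI-client compatible, so tests run unchanged
    # Otherwise talk to an already-running server over HTTP
    # (base URL below is an assumed default, not from this commit).
    from openai import OpenAI

    return OpenAI(base_url="http://localhost:8321/v1/openai/v1")
```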
parent 558d109ab7
commit 6463ee7633

3 changed files with 31 additions and 7 deletions
```diff
@@ -25,6 +25,11 @@ def pytest_addoption(parser):
         action="store",
         help="Provider to use for testing",
     )
+    parser.addoption(
+        "--model",
+        action="store",
+        help="Model to use for testing",
+    )
 
 
 pytest_plugins = [
```
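For reference, a fixture consuming the `--model` option added above might look like the following sketch; the fixture name is illustrative, but `request.config.getoption` is standard pytest API:

```python
import pytest


@pytest.fixture
def model(request):
    # Value passed via --model on the pytest command line (None if omitted)
    return request.config.getoption("--model")
```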