llama-stack/tests/verifications/openai_api
Ashwin Bharambe 6463ee7633
feat: allow using llama-stack-library-client from verifications (#2238)
Having to run (and re-run) a server for verifications is annoying while
you are iterating on code. This change lets you use the library client
instead -- and because it is OpenAI-client compatible, everything works
unchanged.
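
A minimal sketch of what this enables, assuming the
`llama_stack.distribution.library_client.LlamaStackAsLibraryClient` import path
and an OpenAI-compatible `chat.completions` surface; the model and template
names are illustrative, not taken from the PR diff:

```python
# Sketch: using the in-process library client where code expects an OpenAI client.
# Assumes LlamaStackAsLibraryClient and an OpenAI-compatible chat.completions
# surface on it; adjust imports/names to the installed llama-stack version.
from llama_stack.distribution.library_client import LlamaStackAsLibraryClient

client = LlamaStackAsLibraryClient("together")  # distribution/template name
client.initialize()  # builds the stack in-process instead of talking to a server

# Because the client is OpenAI-compatible, existing verification code runs as-is.
response = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```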

## Test Plan

```
pytest -s -v tests/verifications/openai_api/test_responses.py \
   --provider=stack:together \
   --model meta-llama/Llama-4-Scout-17B-16E-Instruct
```
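
The `stack:together` provider value suggests conftest.py strips a `stack:`
prefix and constructs the library client in-process rather than an HTTP
client. A hedged sketch of that selection logic (the fixture and option
names are assumptions, not the PR's actual code, and `--provider` /
`--base-url` would need to be registered via `pytest_addoption`):

```python
# conftest.py sketch: choose between a remote server and the in-process
# library client based on the --provider option. Names are illustrative.
import pytest
from openai import OpenAI


@pytest.fixture
def openai_client(request):
    provider = request.config.getoption("--provider")
    if provider and provider.startswith("stack:"):
        # "stack:together" -> run the "together" distribution in-process
        from llama_stack.distribution.library_client import LlamaStackAsLibraryClient

        client = LlamaStackAsLibraryClient(provider.removeprefix("stack:"))
        client.initialize()
        return client
    # Otherwise talk to an already-running server over HTTP
    return OpenAI(base_url=request.config.getoption("--base-url"), api_key="dummy")
```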
2025-05-23 11:43:41 -07:00
fixtures                 | feat: allow using llama-stack-library-client from verifications (#2238) | 2025-05-23 11:43:41 -07:00
__init__.py              | feat(verification): various improvements (#1921)                        | 2025-04-10 10:26:19 -07:00
conftest.py              | feat: allow using llama-stack-library-client from verifications (#2238) | 2025-05-23 11:43:41 -07:00
test_chat_completion.py  | feat: OpenAI Responses API (#1989)                                      | 2025-04-28 14:06:00 -07:00
test_responses.py        | feat: function tools in OpenAI Responses (#2094)                        | 2025-05-13 11:29:15 -07:00