# llama-stack/tests/verifications/openai_api/fixtures
Latest commit 6463ee7633 by Ashwin Bharambe:
feat: allow using llama-stack-library-client from verifications (#2238)
Having to run (and re-run) a server makes running verifications annoying
while you are iterating on code. This change lets you use the library
client instead -- and because it is OpenAI-client compatible, it all just
works.

## Test Plan

```
pytest -s -v tests/verifications/openai_api/test_responses.py \
   --provider=stack:together \
   --model meta-llama/Llama-4-Scout-17B-16E-Instruct
```
Committed 2025-05-23 11:43:41 -07:00
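
For context, here is a minimal sketch of how a verification fixture could
dispatch on a `stack:<distro>` provider value and return the in-process
`LlamaStackAsLibraryClient` instead of an HTTP client. This is illustrative,
not the actual code in fixtures.py: the fixture name, option handling, and
the fallback base URL are assumptions, and the import path reflects
llama-stack around the time of this PR and may differ in later versions.

```python
import pytest


@pytest.fixture(scope="session")
def openai_client(request):
    # Assumes conftest.py registers --provider via pytest_addoption.
    provider = request.config.getoption("--provider")  # e.g. "stack:together"

    if provider.startswith("stack:"):
        # In-process library client: no server to start or restart while
        # iterating. It is OpenAI-client compatible, so the verification
        # tests can use it unchanged.
        from llama_stack.distribution.library_client import LlamaStackAsLibraryClient

        distro = provider.split(":", 1)[1]  # e.g. "together"
        client = LlamaStackAsLibraryClient(distro)
        client.initialize()
        return client

    # Otherwise, talk to a running server over HTTP via the plain OpenAI
    # client. The base URL below is an example, not a verified endpoint.
    from openai import OpenAI

    return OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")
```

Because both branches expose the same OpenAI-style surface, the tests do not
need to know which kind of client they received.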
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| images | test: add multi_image test (#1972) | 2025-04-17 12:51:42 -07:00 |
| test_cases | fix: Make search tool talk about models (#2151) | 2025-05-13 22:41:51 -07:00 |
| __init__.py | feat(verification): various improvements (#1921) | 2025-04-10 10:26:19 -07:00 |
| fixtures.py | feat: allow using llama-stack-library-client from verifications (#2238) | 2025-05-23 11:43:41 -07:00 |
| load.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |