# llama-stack/tests/verifications/openai_api/fixtures
Ashwin Bharambe 5cdb29758a
feat(responses): add output_text delta events to responses (#2265)
This adds initial streaming support to the Responses API. 

This PR ensures that the _first_ inference call made to chat
completions streams its output.
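
For context, output_text delta events let a client render text as it is generated. A minimal sketch of consuming them with the OpenAI Python SDK (the base URL, model, and dummy API key here are illustrative assumptions, not values mandated by this PR):

```
from openai import OpenAI

# Point the client at a llama-stack OpenAI-compatible endpoint
# (base URL and model are assumptions for illustration).
client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")

stream = client.responses.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    input="Say hello.",
    stream=True,
)

for event in stream:
    # Incremental text arrives as response.output_text.delta events;
    # the final response state arrives with response.completed.
    if event.type == "response.output_text.delta":
        print(event.delta, end="", flush=True)
```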

There's more to be done:
- tool call output tokens need to stream out when possible
- we need to loop through multiple rounds of inference, and each round needs to stream out as well

## Test Plan

Added a test. Executed as:

```
FIREWORKS_API_KEY=... \
  pytest -s -v 'tests/verifications/openai_api/test_responses.py' \
  --provider=stack:fireworks --model meta-llama/Llama-4-Scout-17B-16E-Instruct
```
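
For reference, a rough sketch of what such a streaming test might assert; the `openai_client` and `model` fixture names and the exact event types are assumptions, not the verbatim test added in this PR:

```
def test_response_streaming(openai_client, model):
    # Hypothetical sketch: stream a response and check that
    # output_text deltas arrive and the stream completes.
    stream = openai_client.responses.create(
        model=model,
        input="What is the capital of France?",
        stream=True,
    )
    events = list(stream)

    deltas = [e.delta for e in events if e.type == "response.output_text.delta"]
    assert deltas, "expected at least one output_text delta event"

    completed = [e for e in events if e.type == "response.completed"]
    assert len(completed) == 1
```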

Then, started a Llama Stack fireworks distro and ran the same tests
against it:

```
OPENAI_API_KEY=blah \
  pytest -s -v 'tests/verifications/openai_api/test_responses.py' \
  --base-url http://localhost:8321/v1/openai/v1 \
  --model meta-llama/Llama-4-Scout-17B-16E-Instruct
```
2025-05-27 13:07:14 -07:00
| Name | Last commit | Date |
|---|---|---|
| images | test: add multi_image test (#1972) | 2025-04-17 12:51:42 -07:00 |
| test_cases | feat(responses): add output_text delta events to responses (#2265) | 2025-05-27 13:07:14 -07:00 |
| __init__.py | feat(verification): various improvements (#1921) | 2025-04-10 10:26:19 -07:00 |
| fixtures.py | feat: allow using llama-stack-library-client from verifications (#2238) | 2025-05-23 11:43:41 -07:00 |
| load.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |