llama-stack/llama_stack/providers
Ashwin Bharambe · 5cdb29758a · 2025-05-27 13:07:14 -07:00
feat(responses): add output_text delta events to responses (#2265)
This adds initial streaming support to the Responses API. 

This PR makes sure that the _first_ inference call made to chat
completions streams its output.
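As a hedged illustration (not part of this PR's diff), here is a minimal sketch of consuming these delta events with the `openai` Python client pointed at a running Llama Stack distro. The base URL, API key, and model are taken from the test plan below; the event shape follows the OpenAI Responses streaming format (`response.output_text.delta` events carrying a `delta` string):

```python
# Minimal consumer sketch. Assumes a running Llama Stack fireworks distro
# exposing the OpenAI-compatible Responses API (see the test plan below).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8321/v1/openai/v1",  # Llama Stack endpoint
    api_key="blah",  # the stack does not validate this key
)

stream = client.responses.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    input="Tell me a one-line joke.",
    stream=True,
)

for event in stream:
    # output_text delta events carry incremental text from the model
    if event.type == "response.output_text.delta":
        print(event.delta, end="", flush=True)
```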

There's more to be done:

- tool call output tokens need to stream out when possible
- we need to loop through multiple rounds of inference, and each round
  needs to stream out as well

## Test Plan

Added a test. Executed as:

```
FIREWORKS_API_KEY=... \
  pytest -s -v 'tests/verifications/openai_api/test_responses.py' \
  --provider=stack:fireworks --model meta-llama/Llama-4-Scout-17B-16E-Instruct
```

Then, started a llama stack fireworks distro and tested against it like
this:

```
OPENAI_API_KEY=blah \
  pytest -s -v 'tests/verifications/openai_api/test_responses.py' \
  --base-url http://localhost:8321/v1/openai/v1 \
  --model meta-llama/Llama-4-Scout-17B-16E-Instruct
```
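For context, a streaming test along these lines can assert that delta events actually arrive. This is a hypothetical sketch, not the actual test added in this PR; the `openai_client` and `model` fixtures are illustrative names:

```python
# Hypothetical pytest sketch; fixture names are illustrative only.
def test_response_streaming_emits_deltas(openai_client, model):
    stream = openai_client.responses.create(
        model=model,
        input="Say hello.",
        stream=True,
    )
    events = list(stream)
    # at least one incremental output_text delta should be emitted...
    assert any(e.type == "response.output_text.delta" for e in events)
    # ...and the stream should terminate with a completed response
    assert events[-1].type == "response.completed"
```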
| Path | Last commit | Date |
|------|-------------|------|
| inline | feat(responses): add output_text delta events to responses (#2265) | 2025-05-27 13:07:14 -07:00 |
| registry | feat: accept MCP authorization headers for MCP toolgroups (#2230) | 2025-05-23 08:52:18 -07:00 |
| remote | fix: convert boolean string to boolean (#2284) | 2025-05-27 13:05:38 -07:00 |
| utils | fix: match mcp headers in provider data to Responses API shape (#2263) | 2025-05-25 14:33:10 -07:00 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `datatypes.py` | fix(tools): do not index tools, only index toolgroups (#2261) | 2025-05-25 13:27:52 -07:00 |