feat(responses)!: improve responses + conversations implementations (#3810)

This PR updates the Conversation item-related types and improves a couple of
critical parts of the implementation:

- It creates a streaming output item for the final assistant message produced
  by the model. Until now we only added content parts and included that
  message in the final response (see the first sketch after this list).

- It rewrites the conversation update code completely to account for items
  other than messages (tool calls, outputs, etc.); see the second sketch
  after this list.
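
A minimal sketch of the first change, assuming an OpenAI-compatible Responses streaming protocol; the event type names follow that protocol, but the payloads are simplified to plain dicts and the generator is illustrative rather than the actual implementation:

```python
def stream_final_assistant_message(message_id: str, text_chunks):
    """Yield the streaming events for the final assistant message."""
    # New in this PR: announce the output item before any of its content parts.
    yield {
        "type": "response.output_item.added",
        "item": {"id": message_id, "type": "message", "role": "assistant", "content": []},
    }

    # Content-part events were already emitted before this change.
    yield {
        "type": "response.content_part.added",
        "item_id": message_id,
        "part": {"type": "output_text", "text": ""},
    }

    full_text = ""
    for chunk in text_chunks:
        full_text += chunk
        yield {"type": "response.output_text.delta", "item_id": message_id, "delta": chunk}

    yield {
        "type": "response.content_part.done",
        "item_id": message_id,
        "part": {"type": "output_text", "text": full_text},
    }

    # Also new: close the output item, so clients see the finished assistant
    # message as a first-class streamed item instead of only in the final response.
    yield {
        "type": "response.output_item.done",
        "item": {
            "id": message_id,
            "type": "message",
            "role": "assistant",
            "content": [{"type": "output_text", "text": full_text}],
        },
    }
```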

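A minimal sketch of the second change, assuming conversation items are plain dicts with a `type` field; `sync_conversation_items` and `conversations_api.add_items` are hypothetical names used only for illustration, not the real Llama Stack interfaces:

```python
# Item types that should be mirrored into the conversation. Previously only
# assistant messages were synced; tool calls and their outputs were dropped.
SYNCABLE_ITEM_TYPES = {
    "message",               # user / assistant messages
    "function_call",         # client-side tool calls requested by the model
    "function_call_output",  # tool results supplied back by the client
}


def sync_conversation_items(conversations_api, conversation_id: str, items) -> int:
    """Append every supported output item to the conversation; return the count added."""
    to_add = [item for item in items if item.get("type") in SYNCABLE_ITEM_TYPES]
    if to_add:
        conversations_api.add_items(conversation_id, to_add)
    return len(to_add)
```

With this, a turn that triggers a client-side function tool records the `function_call` and `function_call_output` items in the conversation alongside the final assistant message.
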
## Test Plan

Used the test script from
https://github.com/llamastack/llama-stack-client-python/pull/281 for this:

```
TEST_API_BASE_URL=http://localhost:8321/v1 \
  pytest tests/integration/test_agent_turn_step_events.py::test_client_side_function_tool -xvs
```

The PR touches 129 files (86,266 additions and 903 deletions). One representative hunk updates an expected answer in `multi_turn_image_test_cases`:

```diff
@@ -212,7 +212,7 @@ multi_turn_image_test_cases = [
         ),
         (
             "What country do you find this animal primarily in? What continent?",
-            "peru",
+            "south america",
         ),
     ],
 ),
```