llama-stack-mirror/llama_stack/providers
Ashwin Bharambe e9b4278a51
feat(responses)!: improve responses + conversations implementations (#3810)
This PR updates the Conversation item-related types and improves a
couple of critical parts of the implementation:

- it creates a streaming output item for the final assistant message
  produced by the model; until now we only added content parts and
  included that message in the final response (see the first sketch
  after this list)

- rewrites the conversation update code completely to account for items
  other than messages (tool calls, outputs, etc.; see the second sketch
  below)
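
A minimal sketch of the first change, assuming illustrative event and
item shapes — the dict-based events, `MessageItem`, and
`stream_final_message` here are hypothetical, not llama-stack's actual
types. After the content-part deltas finish, the assembled assistant
message is emitted as its own `response.output_item.done` event rather
than appearing only in the terminal response object:

```
from dataclasses import dataclass, field


@dataclass
class MessageItem:
    # hypothetical stand-in for the assistant message output item
    id: str
    type: str = "message"
    role: str = "assistant"
    content: list = field(default_factory=list)


async def stream_final_message(text_chunks, item_id):
    """Yield content-part deltas, then the finished message as an output item."""
    item = MessageItem(id=item_id)
    # announce the item up front so clients can associate deltas with it
    yield {"type": "response.output_item.added", "item": item}
    for chunk in text_chunks:
        item.content.append(chunk)
        yield {"type": "response.output_text.delta", "item_id": item_id, "delta": chunk}
    # new behavior: stream the completed assistant message as an output
    # item instead of only surfacing it in the final response object
    yield {"type": "response.output_item.done", "item": item}
```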

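And a sketch of the second change, again under assumed item shapes: the
conversation sync now maps every response output item onto a
conversation item, where previously only chat messages were handled.
The helper name and dict layouts are hypothetical:

```
def conversation_items_from_response(output_items):
    """Map response output items to conversation items, not just messages.

    Hypothetical helper; the item dict shapes are assumptions.
    """
    items = []
    for item in output_items:
        kind = item.get("type")
        if kind == "message":
            items.append({"type": "message", "role": item["role"],
                          "content": item["content"]})
        elif kind == "function_call":
            items.append({"type": "function_call", "call_id": item["call_id"],
                          "name": item["name"], "arguments": item["arguments"]})
        elif kind == "function_call_output":
            items.append({"type": "function_call_output",
                          "call_id": item["call_id"], "output": item["output"]})
        else:
            # other kinds (web_search_call, reasoning, ...) pass through unchanged
            items.append(item)
    return items
```
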
## Test Plan

Used the test script from
https://github.com/llamastack/llama-stack-client-python/pull/281 for
this

```
TEST_API_BASE_URL=http://localhost:8321/v1 \
  pytest tests/integration/test_agent_turn_step_events.py::test_client_side_function_tool -xvs
```
2025-10-15 09:36:11 -07:00
| Name | Last commit | Date |
|---|---|---|
| `inline` | feat(responses)!: improve responses + conversations implementations (#3810) | 2025-10-15 09:36:11 -07:00 |
| `registry` | feat: Enable setting a default embedding model in the stack (#3803) | 2025-10-14 18:25:13 -07:00 |
| `remote` | feat(gemini): Support gemini-embedding-001 and fix models/ prefix in metadata keys (#3813) | 2025-10-15 12:22:10 -04:00 |
| `utils` | feat(responses)!: improve responses + conversations implementations (#3810) | 2025-10-15 09:36:11 -07:00 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `datatypes.py` | feat: combine ProviderSpec datatypes (#3378) | 2025-09-18 16:10:00 +02:00 |