llama-stack-mirror/llama_stack/providers/inline/agents/meta_reference
Ashwin Bharambe 30ba8c8655
fix(responses): sync conversation before yielding terminal events in streaming (#3888)
Move the conversation sync logic before the yield to ensure it executes even when streaming consumers break early after receiving the `response.completed` event.
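The ordering issue the commit describes can be sketched in miniature. This is a hypothetical illustration, not the actual `meta_reference` code: a consumer that `break`s out of the event loop once it sees the terminal event leaves the generator suspended at that `yield`, so any work placed after it is skipped; placing the sync before the terminal yield guarantees it runs.

```python
# Hypothetical sketch of the fix: sync before yielding the terminal event,
# because consumers often break out of the loop as soon as they see it.

synced = []

def stream_events_buggy():
    yield "response.in_progress"
    yield "response.completed"
    synced.append("buggy")  # skipped: consumer breaks at the terminal event

def stream_events_fixed():
    yield "response.in_progress"
    synced.append("fixed")  # sync the conversation BEFORE the terminal event
    yield "response.completed"

def consume(stream):
    for event in stream:
        if event == "response.completed":
            break  # typical consumer stops once the response is complete

consume(stream_events_buggy())
consume(stream_events_fixed())
print(synced)  # → ['fixed']
```

Only the fixed generator records its sync; the buggy one is suspended at the terminal `yield` when the consumer breaks, and the trailing statement never executes.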

## Test Plan

```
OLLAMA_URL=http://localhost:11434 \
  pytest -sv tests/integration/responses/ \
  --stack-config server:ci-tests \
  --text-model ollama/llama3.2:3b-instruct-fp16 \
  --inference-mode live \
  -k conversation_multi
```

This test now passes.
2025-10-22 14:31:12 -07:00
| File | Last commit | Date |
|------|-------------|------|
| responses/ | fix(responses): sync conversation before yielding terminal events in streaming (#3888) | 2025-10-22 14:31:12 -07:00 |
| __init__.py | chore!: remove telemetry API usage (#3815) | 2025-10-16 10:39:32 -07:00 |
| agent_instance.py | feat(api)!: BREAKING CHANGE: support passing extra_body through to providers (#3777) | 2025-10-10 16:21:44 -07:00 |
| agents.py | feat(stores)!: use backend storage references instead of configs (#3697) | 2025-10-20 13:20:09 -07:00 |
| config.py | feat(stores)!: use backend storage references instead of configs (#3697) | 2025-10-20 13:20:09 -07:00 |
| persistence.py | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |
| safety.py | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |