llama-stack-mirror/llama_stack/providers/inline/agents
Ashwin Bharambe cfb86f1178 feat(responses): implement usage tracking in streaming responses
Implementation changes:
- Add usage accumulation to StreamingResponseOrchestrator
- Enable stream_options to receive usage in streaming chunks
- Track usage across multi-turn responses with tool execution
- Convert between chat completion and response usage formats
- Extract usage accumulation into a helper method for clarity (see the sketch after this list)
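
A minimal, self-contained sketch of the pattern these changes describe, using the OpenAI Python client directly rather than llama-stack internals: `stream_options={"include_usage": True}` asks the server to attach usage to the final streaming chunk, and a small helper folds each chunk's chat-completion usage (prompt/completion tokens) into an accumulated response-style total (input/output tokens) across multiple turns. The `ResponseUsage` class, helper names, and model ID below are illustrative assumptions, not the actual StreamingResponseOrchestrator code.

```python
from dataclasses import dataclass

from openai import OpenAI


@dataclass
class ResponseUsage:
    # Hypothetical stand-in for the response-side usage type
    # (field names follow the OpenAI Responses API convention).
    input_tokens: int = 0
    output_tokens: int = 0
    total_tokens: int = 0

    def add_chat_completion_usage(self, usage) -> None:
        # Convert chat-completion field names (prompt/completion)
        # into response field names (input/output) while accumulating.
        self.input_tokens += usage.prompt_tokens
        self.output_tokens += usage.completion_tokens
        self.total_tokens += usage.total_tokens


def stream_one_turn(client: OpenAI, messages: list[dict], totals: ResponseUsage) -> str:
    """Stream one chat completion, accumulating its usage into `totals`."""
    text = ""
    stream = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, any chat model works
        messages=messages,
        stream=True,
        # Ask the server to attach a usage object to the final chunk.
        stream_options={"include_usage": True},
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            text += chunk.choices[0].delta.content
        # Only the last chunk carries usage; intermediate chunks have None.
        if chunk.usage is not None:
            totals.add_chat_completion_usage(chunk.usage)
    return text


if __name__ == "__main__":
    client = OpenAI()
    totals = ResponseUsage()
    # Two turns (e.g. before and after a tool call) accumulate into one total.
    stream_one_turn(client, [{"role": "user", "content": "Say hi."}], totals)
    stream_one_turn(client, [{"role": "user", "content": "Say bye."}], totals)
    print(totals)
```

Guarding the accumulation on `chunk.usage is not None` keeps the loop correct for providers that emit usage only on the final chunk, and calling the same helper once per turn is what lets usage span tool-execution round trips.
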

Test changes:
- Add usage assertions to streaming and non-streaming tests (sketched below)
- Update test recordings with actual usage data from OpenAI
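
On the test side, the assertions presumably look something like the hedged sketch below; the helper name, `response` argument, and field names (again following the OpenAI Responses API usage shape) are assumptions rather than the repository's actual test code.

```python
def assert_usage_populated(response) -> None:
    # Hypothetical check: verify a streamed or non-streamed response
    # carries a fully populated usage block whose totals are consistent.
    usage = response.usage
    assert usage is not None
    assert usage.input_tokens > 0
    assert usage.output_tokens > 0
    assert usage.total_tokens == usage.input_tokens + usage.output_tokens
```
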
2025-10-10 10:13:33 -07:00
meta_reference feat(responses): implement usage tracking in streaming responses 2025-10-10 10:13:33 -07:00
__init__.py add missing inits 2024-11-08 17:54:24 -08:00