Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-16 23:29:28 +00:00)
Implementation changes:
- Add usage accumulation to StreamingResponseOrchestrator
- Enable stream_options to receive usage in streaming chunks
- Track usage across multi-turn responses with tool execution
- Convert between chat completion and response usage formats
- Extract usage accumulation into a helper method for clarity

Test changes:
- Add usage assertions to streaming and non-streaming tests
- Update test recordings with actual usage data from OpenAI

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
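The accumulation pattern described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: the `Usage` dataclass and `accumulate_usage` helper are hypothetical names, and the chunk usage shape assumes OpenAI-style `prompt_tokens`/`completion_tokens` fields as delivered when `stream_options={"include_usage": True}` is set.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Usage:
    # Hypothetical counters mirroring chat-completion usage fields.
    input_tokens: int = 0
    output_tokens: int = 0

    @property
    def total_tokens(self) -> int:
        return self.input_tokens + self.output_tokens


def accumulate_usage(total: Usage, chunk_usage: Optional[dict]) -> Usage:
    """Fold one streaming chunk's usage into the running total.

    Chunks before the final one typically carry no usage, so None is a
    no-op; a multi-turn response (e.g. with tool execution between
    turns) sums the usage of each underlying chat completion.
    """
    if chunk_usage is None:
        return total
    total.input_tokens += chunk_usage.get("prompt_tokens", 0)
    total.output_tokens += chunk_usage.get("completion_tokens", 0)
    return total


# Example: two turns — the initial call, then a follow-up after tool execution.
usage = Usage()
for u in [
    None,                                          # content chunk, no usage
    {"prompt_tokens": 12, "completion_tokens": 5}, # final chunk, turn 1
    None,                                          # content chunk, turn 2
    {"prompt_tokens": 30, "completion_tokens": 8}, # final chunk, turn 2
]:
    usage = accumulate_usage(usage, u)

print(usage.input_tokens, usage.output_tokens, usage.total_tokens)  # 42 13 55
```

Summing per-chunk usage into a single running total is what lets the final response report one consolidated usage figure even when tool execution triggers multiple underlying completions.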
Files:
- responses/
- __init__.py
- agent_instance.py
- agents.py
- config.py
- persistence.py
- safety.py