Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-15 16:02:43 +00:00
Implementation changes:
- Add usage accumulation to StreamingResponseOrchestrator
- Enable stream_options to receive usage in streaming chunks
- Track usage across multi-turn responses with tool execution
- Convert between chat completion and response usage formats
- Extract usage accumulation into helper method for clarity

Test changes:
- Add usage assertions to streaming and non-streaming tests
- Update test recordings with actual usage data from OpenAI

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
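The usage-accumulation behavior described above can be sketched roughly as follows. This is a minimal illustration, not llama-stack's actual implementation: the `Usage` and `UsageAccumulator` names are hypothetical, though the token-count fields follow the OpenAI chat-completion usage shape that `stream_options={"include_usage": True}` produces.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Usage:
    # Field names mirror the OpenAI chat-completion usage object.
    prompt_tokens: int = 0
    completion_tokens: int = 0
    total_tokens: int = 0


class UsageAccumulator:
    """Hypothetical helper: accumulates per-chunk usage across a
    multi-turn streaming response (e.g. one turn per tool execution)."""

    def __init__(self) -> None:
        self.total = Usage()

    def add(self, chunk_usage: Optional[Usage]) -> None:
        # With stream_options={"include_usage": True}, typically only the
        # final chunk of each turn carries usage; earlier chunks carry none.
        if chunk_usage is None:
            return
        self.total.prompt_tokens += chunk_usage.prompt_tokens
        self.total.completion_tokens += chunk_usage.completion_tokens
        self.total.total_tokens += chunk_usage.total_tokens
```

A caller would feed each streamed chunk's usage (if any) into `add` and read `accumulator.total` once the response completes, giving a single combined usage figure across all turns.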
| Name |
|---|
| apis |
| cli |
| core |
| distributions |
| models |
| providers |
| strong_typing |
| testing |
| ui |
| __init__.py |
| env.py |
| log.py |
| schema_utils.py |