Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-15 18:43:11 +00:00)
Implementation changes:

- Add usage accumulation to StreamingResponseOrchestrator
- Enable stream_options to receive usage in streaming chunks
- Track usage across multi-turn responses with tool execution
- Convert between chat completion and response usage formats
- Extract usage accumulation into a helper method for clarity

Test changes:

- Add usage assertions to streaming and non-streaming tests
- Update test recordings with actual usage data from OpenAI
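The accumulation described above can be sketched as follows. This is a hypothetical illustration, not the llama-stack implementation: the `Usage` dataclass and `accumulate_usage` helper are assumptions, though the field names mirror the OpenAI chat-completion usage format, where streaming chunks carry usage only on the final chunk when `stream_options={"include_usage": True}` is set.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Usage:
    # Field names follow the OpenAI chat-completion usage format.
    prompt_tokens: int = 0
    completion_tokens: int = 0
    total_tokens: int = 0

def accumulate_usage(total: Usage, chunk_usage: Optional[Usage]) -> Usage:
    """Fold one chunk's usage (if present) into the running total."""
    if chunk_usage is None:
        # Most streaming chunks carry no usage; only the final chunk
        # of each turn reports it when include_usage is enabled.
        return total
    return Usage(
        prompt_tokens=total.prompt_tokens + chunk_usage.prompt_tokens,
        completion_tokens=total.completion_tokens + chunk_usage.completion_tokens,
        total_tokens=total.total_tokens + chunk_usage.total_tokens,
    )

# Simulate two turns of a multi-turn response (e.g. a tool-execution
# round followed by the final answer), each reporting usage once.
total = Usage()
for chunk_usage in [None, Usage(10, 5, 15), None, Usage(20, 7, 27)]:
    total = accumulate_usage(total, chunk_usage)

print(total.total_tokens)  # 42
```

Keeping the fold in one helper, as the commit describes, means the same logic covers both the single-turn and tool-execution paths.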
Directory listing:

- inline/
- registry/
- remote/
- utils/
- __init__.py
- datatypes.py