Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-13 21:32:40 +00:00)
Add OpenAI-compatible usage tracking types:

- OpenAIChatCompletionUsage with prompt/completion token counts
- OpenAIResponseUsage with input/output token counts
- Token detail types for cached_tokens and reasoning_tokens
- Add usage field to chat completion and response objects

This enables reporting token consumption for both streaming and non-streaming responses, matching OpenAI's usage reporting format.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
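As a rough illustration of what the commit message describes, the usage types might look like the following sketch. The class and field names are taken from the commit message and from OpenAI's usage reporting format; everything else (plain dataclasses, optional detail fields) is an assumption, and the actual llama-stack definitions may differ, for example by using pydantic models.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class OpenAIChatCompletionUsagePromptTokensDetails:
    """Hypothetical detail type: tokens served from the prompt cache."""
    cached_tokens: Optional[int] = None


@dataclass
class OpenAIChatCompletionUsageCompletionTokensDetails:
    """Hypothetical detail type: tokens spent on reasoning."""
    reasoning_tokens: Optional[int] = None


@dataclass
class OpenAIChatCompletionUsage:
    """Usage for chat completions, keyed by prompt/completion token counts."""
    prompt_tokens: int
    completion_tokens: int
    total_tokens: int
    prompt_tokens_details: Optional[OpenAIChatCompletionUsagePromptTokensDetails] = None
    completion_tokens_details: Optional[OpenAIChatCompletionUsageCompletionTokensDetails] = None


@dataclass
class OpenAIResponseUsage:
    """Usage for the Responses API, keyed by input/output token counts."""
    input_tokens: int
    output_tokens: int
    total_tokens: int


# Example: report usage for a non-streaming chat completion.
usage = OpenAIChatCompletionUsage(
    prompt_tokens=10,
    completion_tokens=5,
    total_tokens=15,
    completion_tokens_details=OpenAIChatCompletionUsageCompletionTokensDetails(
        reasoning_tokens=2
    ),
)
```

In OpenAI's format the `total_tokens` field is the sum of the prompt and completion counts, while the detail objects carry optional breakdowns such as cached or reasoning tokens.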
Repository contents:

- apis
- cli
- core
- distributions
- models
- providers
- strong_typing
- testing
- ui
- __init__.py
- env.py
- log.py
- schema_utils.py