Add OpenAI-compatible usage tracking types

- OpenAIChatCompletionUsage with prompt/completion token counts
- OpenAIResponseUsage with input/output token counts
- Token detail types for cached_tokens and reasoning_tokens
- Add usage field to chat completion and response objects

This enables reporting token consumption for both streaming and non-streaming responses, matching OpenAI's usage reporting format.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
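The commit names the types but not their full field layout. The sketch below fills that in from OpenAI's documented usage schema, assuming pydantic models in the style llama-stack uses for its OpenAI-compatible API surface; any field or class name beyond those in the message above is an assumption, not llama-stack's actual definition.

```python
# Hedged sketch of the usage types described in the commit message.
# Field names follow OpenAI's documented usage objects; the detail-type
# class names here are hypothetical.
from pydantic import BaseModel


class OpenAIChatCompletionUsagePromptTokensDetails(BaseModel):
    # Tokens served from the provider's prompt cache
    # (OpenAI's prompt_tokens_details.cached_tokens).
    cached_tokens: int | None = None


class OpenAIChatCompletionUsageCompletionTokensDetails(BaseModel):
    # Tokens spent on internal reasoning
    # (OpenAI's completion_tokens_details.reasoning_tokens).
    reasoning_tokens: int | None = None


class OpenAIChatCompletionUsage(BaseModel):
    # Prompt/completion token counts, matching OpenAI's
    # chat completion usage object.
    prompt_tokens: int
    completion_tokens: int
    total_tokens: int
    prompt_tokens_details: OpenAIChatCompletionUsagePromptTokensDetails | None = None
    completion_tokens_details: OpenAIChatCompletionUsageCompletionTokensDetails | None = None


class OpenAIResponseUsage(BaseModel):
    # Input/output token counts, matching the Responses API usage object.
    input_tokens: int
    output_tokens: int
    total_tokens: int
```

For streaming chat completions, OpenAI reports this object on the final chunk when the client sets `stream_options={"include_usage": true}`; a compatible implementation attaching a `usage` field to its streaming responses would presumably follow the same convention.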