llama-stack-mirror/llama_stack
Michael Dawson 80405da304 fix: ensure usage is requested if telemetry is enabled
Refs: https://github.com/llamastack/llama-stack/issues/3420

When telemetry is enabled the router unconditionally expects the
usage attribute to be available and fails if it is not present.

Usage is not currently being requested by litellm_openai_mixin.py
for streaming requests, which means that providers like vertexai
fail if telemetry is enabled and streaming is used.

This is part of the required fix. The other part is in litellm; I
plan to submit a PR for that soon.

Signed-off-by: Michael Dawson <midawson@redhat.com>
2025-09-26 15:50:11 -04:00
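
For context, the sketch below shows the kind of guard the commit message describes: when telemetry is enabled and the request is streaming, explicitly ask the provider to include token usage in the stream via the OpenAI-style stream_options parameter, which litellm generally passes through to providers that support it. This is a minimal hypothetical sketch, not the actual patch; build_completion_params, request, and telemetry_enabled are illustrative names.

    # Hypothetical sketch, not the actual llama-stack code.
    def build_completion_params(request, telemetry_enabled: bool) -> dict:
        params = {
            "model": request.model,
            "messages": request.messages,
            "stream": request.stream,
        }
        if request.stream and telemetry_enabled:
            # Ask the provider to emit a final chunk carrying token usage;
            # otherwise providers such as vertexai stream responses without
            # a usage block and the telemetry router fails when it reads
            # usage unconditionally.
            params["stream_options"] = {"include_usage": True}
        return params
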
apis feat: create HTTP DELETE API endpoints to unregister ScoringFn and Benchmark resources in Llama Stack (#3371) 2025-09-15 12:43:38 -07:00
cli feat: migrate to FIPS-validated cryptographic algorithms (#3423) 2025-09-12 11:18:19 +02:00
core chore: fix build (#3522) 2025-09-22 22:53:48 -07:00
distributions feat: combine ProviderSpec datatypes (#3378) 2025-09-18 16:10:00 +02:00
models refactor(logging): rename llama_stack logger categories (#3065) 2025-08-21 17:31:04 -07:00
providers fix: ensure usage is requested if telemetry is enabled 2025-09-26 15:50:11 -04:00
strong_typing chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
testing fix: Update inference recorder to handle both Ollama and OpenAI model (#3470) 2025-09-21 09:32:39 -04:00
ui chore(ui-deps): bump jest-environment-jsdom from 29.7.0 to 30.1.2 in /llama_stack/ui (#3509) 2025-09-22 13:57:10 +02:00
__init__.py chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
env.py refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) 2025-03-04 14:53:47 -08:00
log.py chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) 2025-08-20 07:15:35 -04:00
schema_utils.py feat(auth): API access control (#2822) 2025-07-24 15:30:48 -07:00