Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-10-04 04:04:14 +00:00)
Refs: https://github.com/llamastack/llama-stack/issues/3420

When telemetry is enabled, the router unconditionally expects the usage attribute to be available and fails if it is not present. Usage data is not currently being requested by litellm_openai_mixin.py for streaming requests, which means that providers like vertexai fail when telemetry is enabled and streaming is used. This is one part of the required fix; the other part is in litellm, and a PR for that will follow soon.

Signed-off-by: Michael Dawson <midawson@redhat.com>
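The sketch below illustrates the two halves of the fix described above; it is not the repository's actual code. OpenAI-compatible backends only attach token usage to the final streaming chunk when it is explicitly requested via `stream_options`, which litellm passes through, and the telemetry path must tolerate chunks that carry no `usage` attribute. The function names `add_streaming_usage` and `record_token_usage` are hypothetical placeholders.

```python
def add_streaming_usage(request_params: dict) -> dict:
    # Hypothetical helper: ask the backend to include token usage in
    # streaming responses. litellm forwards `stream_options` to
    # OpenAI-compatible providers.
    if request_params.get("stream"):
        request_params.setdefault("stream_options", {"include_usage": True})
    return request_params


def record_token_usage(chunk) -> None:
    # Hypothetical telemetry hook: guard the attribute access instead of
    # assuming `usage` is always present. Most streaming chunks (and all
    # chunks from providers like vertexai today) carry no usage data.
    usage = getattr(chunk, "usage", None)
    if usage is None:
        return
    print(
        f"prompt={usage.prompt_tokens} "
        f"completion={usage.completion_tokens} "
        f"total={usage.total_tokens}"
    )
```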
Files in this directory:

- __init__.py
- embedding_mixin.py
- inference_store.py
- litellm_openai_mixin.py
- model_registry.py
- openai_compat.py
- openai_mixin.py
- prompt_adapter.py