llama-stack-mirror/llama_stack/providers/inline
Charlie Doern e2aefd797f fix: fix meta_reference telemetry console
Currently, none of the inference metrics are printed when using the console.

This is because self._log_metric only records anything when self.meter is not None, and the console-only setup has no meter.

Fix: acquire the lock and add the event to the span, as the other `_log_...` methods do.

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-08-06 15:48:41 -04:00
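The fix described above can be sketched as follows. This is a minimal illustration with hypothetical names (the real adapter lives in the telemetry provider and uses OpenTelemetry spans); it shows the pattern of the fix: instead of returning early when `self.meter` is None, `_log_metric` attaches the metric as a span event under the lock, mirroring the other `_log_...` methods, so console sinks that print span events also see metrics.

```python
import threading


class TelemetryAdapter:
    """Hypothetical sketch of the console-metrics fix; not the actual
    llama-stack implementation."""

    def __init__(self, meter=None):
        self.meter = meter            # None when only the console sink is configured
        self._lock = threading.Lock()
        self._spans = {}              # span_id -> list of span events (stand-in for real spans)

    def _log_metric(self, span_id, name, value, unit="", attributes=None):
        if self.meter is not None:
            # OTLP path: record the metric via the real meter (omitted here).
            pass
        # Previously the method stopped here when self.meter was None, so
        # console users never saw inference metrics. The fix: always record
        # the metric as a span event, under the lock, like other _log_... methods.
        with self._lock:
            events = self._spans.setdefault(span_id, [])
            events.append({
                "name": f"metric.{name}",
                "attributes": {"value": value, "unit": unit, **(attributes or {})},
            })
```

With this shape, a console span processor that prints span events will print `metric.prompt_tokens` and friends even though no meter is configured.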
Directory       Last commit                                                                Date
agents          chore: standardize session not found error (#3031)                         2025-08-04 13:12:02 -07:00
datasetio       chore(misc): make tests and starter faster (#3042)                         2025-08-05 14:55:05 -07:00
eval            chore(rename): move llama_stack.distribution to llama_stack.core (#2975)   2025-07-30 23:30:53 -07:00
files/localfs   chore(rename): move llama_stack.distribution to llama_stack.core (#2975)   2025-07-30 23:30:53 -07:00
inference       chore(rename): move llama_stack.distribution to llama_stack.core (#2975)   2025-07-30 23:30:53 -07:00
ios/inference   chore: removed executorch submodule (#1265)                                2025-02-25 21:57:21 -08:00
post_training   chore(misc): make tests and starter faster (#3042)                         2025-08-05 14:55:05 -07:00
safety          feat: create unregister shield API endpoint in Llama Stack (#2853)         2025-08-05 07:33:46 -07:00
scoring         chore(rename): move llama_stack.distribution to llama_stack.core (#2975)   2025-07-30 23:30:53 -07:00
telemetry       fix: fix meta_reference telemetry console                                  2025-08-06 15:48:41 -04:00
tool_runtime    feat: Add ChunkMetadata to Chunk (#2497)                                   2025-06-25 15:55:23 -04:00
vector_io       refactor: Remove double filtering based on score threshold (#3019)         2025-08-02 15:57:03 -07:00
__init__.py     impls -> inline, adapters -> remote (#381)                                 2024-11-06 14:54:05 -08:00