llama-stack-mirror/llama_stack/providers/inline
Charlie Doern 6bcc1ad205 feat: telemetry logging fixes
this does a few things:

1. fixes `on_start` so that all span [START] and [END] markers are printed, not just [END]
2. changes `log.py` to set the default `telemetry` category to WARN instead of INFO

This preserves metric logging and the verbose span [START] and [END] output while hiding it from normal users by default.

This conforms to our logging system: a user just needs to switch the category to INFO to see the logs.
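The behavior described above can be sketched with a minimal span processor. Note this is an illustrative example, not the actual llama-stack implementation: the `TelemetrySpanProcessor` class name and its `on_start`/`on_end` signatures are hypothetical, and only the use of a `telemetry` logger category defaulting to WARN mirrors the change.

```python
import logging

# Hypothetical "telemetry" logger category; defaults to WARN so span
# chatter is hidden from normal users.
telemetry_log = logging.getLogger("telemetry")
telemetry_log.setLevel(logging.WARNING)


class TelemetrySpanProcessor:
    """Illustrative processor: logs both span boundaries.

    Before the fix, only on_end produced output; the fix makes
    on_start log a [START] line as well.
    """

    def on_start(self, span_name: str) -> None:
        telemetry_log.info("[START] %s", span_name)

    def on_end(self, span_name: str) -> None:
        telemetry_log.info("[END] %s", span_name)


# A user who wants to see the span lines switches the category to INFO:
# telemetry_log.setLevel(logging.INFO)
```

At the default WARN level the `info` calls are filtered out, so nothing changes for normal users; raising the category to INFO restores the full [START]/[END] trace.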

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-08-15 17:47:26 -04:00
agents Revert "refactor(agents): migrate to OpenAI chat completions API" (#3167) 2025-08-15 12:01:07 -07:00
datasetio chore(misc): make tests and starter faster (#3042) 2025-08-05 14:55:05 -07:00
eval chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
files/localfs chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
inference chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
ios/inference chore: removed executorch submodule (#1265) 2025-02-25 21:57:21 -08:00
post_training chore(misc): make tests and starter faster (#3042) 2025-08-05 14:55:05 -07:00
safety chore: Change moderations api response to Provider returned categories (#3098) 2025-08-13 09:47:35 -07:00
scoring chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
telemetry feat: telemetry logging fixes 2025-08-15 17:47:26 -04:00
tool_runtime feat: Add ChunkMetadata to Chunk (#2497) 2025-06-25 15:55:23 -04:00
vector_io chore(tests): fix responses and vector_io tests (#3119) 2025-08-12 16:15:53 -07:00
__init__.py impls -> inline, adapters -> remote (#381) 2024-11-06 14:54:05 -08:00