llama-stack-mirror/llama_stack/providers/utils/telemetry
ehhuang f6bf36343d
chore: logging perf improvements (#3393)
# What does this PR do?
- Use BackgroundLogger when logging metric events.
- Reuse event loop in BackgroundLogger
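
The implementation itself is in `tracing.py`; as a minimal sketch of the pattern the two bullets describe (class and method names here are hypothetical, not the actual llama-stack API): a single daemon thread owns one long-lived asyncio event loop, and callers hand events off through a non-blocking queue so the request path never pays for I/O or loop setup.

```python
import asyncio
import queue
import threading


class BackgroundLogger:
    """Sketch: log events on a daemon thread that reuses one event loop."""

    def __init__(self, maxsize: int = 10000):
        self._queue: queue.Queue = queue.Queue(maxsize=maxsize)
        self.processed: list = []  # stand-in for the real sink, for demonstration
        # Create the event loop once and reuse it for every event,
        # instead of building a new loop per log call.
        self._loop = asyncio.new_event_loop()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self) -> None:
        asyncio.set_event_loop(self._loop)
        self._loop.run_until_complete(self._drain())

    async def _drain(self) -> None:
        while True:
            # Block for the next event off the loop thread's executor.
            event = await asyncio.to_thread(self._queue.get)
            if event is None:  # shutdown sentinel
                return
            self.processed.append(event)  # real code would emit the event here

    def log_event(self, event) -> None:
        try:
            self._queue.put_nowait(event)  # never block the caller
        except queue.Full:
            pass  # drop rather than stall the request path

    def close(self) -> None:
        self._queue.put(None)
        self._thread.join()


if __name__ == "__main__":
    logger = BackgroundLogger()
    for i in range(5):
        logger.log_event({"metric": "request", "value": i})
    logger.close()
```

Dropping events under backpressure is a deliberate trade-off here: telemetry loss is preferable to blocking inference requests, which is consistent with the RPS gain this PR reports.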

## Test Plan
```
cd docs/source/distributions/k8s-benchmark
# start mock server
python openai-mock-server.py --port 8000
# start stack server
LLAMA_STACK_LOGGING="all=WARNING" uv run --with llama-stack python -m llama_stack.core.server.server docs/source/distributions/k8s-benchmark/stack_run_config.yaml
# run benchmark script
uv run python3 benchmark.py --duration 120 --concurrent 50 --base-url=http://localhost:8321/v1/openai/v1 --model=vllm-inference/meta-llama/Llama-3.2-3B-Instruct
```
### RPS from 57 -> 62
2025-09-10 11:52:23 -07:00
__init__.py kill unnecessarily large imports from telemetry init 2024-12-08 16:57:16 -08:00
dataset_mixin.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
sqlite_trace_store.py feat: implement query_metrics (#3074) 2025-08-22 14:19:24 -07:00
trace_protocol.py chore: update pre-commit hook versions (#2708) 2025-07-10 16:47:59 +02:00
tracing.py chore: logging perf improvements (#3393) 2025-09-10 11:52:23 -07:00