llama-stack-mirror/llama_stack/providers
ehhuang 9936f33f7e
chore: disable telemetry if otel endpoint isn't set (#3859)
# What does this PR do?

Removes the following error, which appeared whenever no OTLP endpoint was reachable:

    ConnectionError: HTTPConnectionPool(host='localhost', port=4318): Max retries exceeded with url: /v1/traces (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10fd98e60>: Failed to establish a new connection: [Errno 61] Connection refused'))


## Test Plan

1. `uv run llama stack run starter`
2. `curl http://localhost:8321/v1/models`
3. Observe no connection error in the server logs.
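The gist of the change is to skip telemetry setup entirely when no OTLP endpoint is configured, so the exporter never retries against an unreachable `localhost:4318`. A minimal sketch of that guard, assuming the standard OpenTelemetry environment variables (the function names here are hypothetical, not the actual PR code):

```python
import os


def otlp_endpoint_configured() -> bool:
    """Return True only if an OTLP endpoint is set via the standard
    OpenTelemetry environment variables."""
    return bool(
        os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT")
        or os.environ.get("OTEL_EXPORTER_OTLP_TRACES_ENDPOINT")
    )


def maybe_setup_telemetry() -> bool:
    """Initialize the trace exporter only when an endpoint is configured.

    Returns True if telemetry was enabled, False if it was skipped.
    """
    if not otlp_endpoint_configured():
        # No endpoint set: leave telemetry disabled, so no exporter ever
        # attempts (and fails) to connect to localhost:4318.
        return False
    # ... initialize the OTLP span exporter here ...
    return True
```

With this guard, the server starts cleanly with telemetry off by default, and enabling it is just a matter of exporting `OTEL_EXPORTER_OTLP_ENDPOINT`.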
2025-10-20 11:42:57 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| inline | chore: disable telemetry if otel endpoint isn't set (#3859) | 2025-10-20 11:42:57 -07:00 |
| registry | chore!: remove telemetry API usage (#3815) | 2025-10-16 10:39:32 -07:00 |
| remote | docs: Documentation update for NVIDIA Inference Provider (#3840) | 2025-10-20 09:51:43 -07:00 |
| utils | test(telemetry): Telemetry Tests (#3805) | 2025-10-17 10:43:33 -07:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| datatypes.py | feat: combine ProviderSpec datatypes (#3378) | 2025-09-18 16:10:00 +02:00 |