llama-stack-mirror/llama_stack/providers
Ben Browning dd57eff47b feat: Propagate W3C trace context headers from clients
This extracts the W3C trace context headers (traceparent and
tracestate) from incoming requests, stuffs them as attributes on the
spans we create, and uses them within the tracing provider
implementation to actually wrap our spans in the proper context.
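The extraction step can be sketched as below. This is a minimal illustration, not the actual Llama Stack code: the names `TRACE_HEADERS`, `extract_trace_headers`, and `start_span_attributes` are hypothetical, and only the header names (`traceparent`, `tracestate`, per the W3C Trace Context spec) and the `__root__` attribute mentioned in this commit come from the source.

```python
# Sketch: pull the W3C trace context headers out of an incoming request's
# headers and stash them as attributes on the span we are about to create.
# Header names follow the W3C Trace Context spec; function names are
# illustrative, not the real Llama Stack implementation.
TRACE_HEADERS = ("traceparent", "tracestate")


def extract_trace_headers(headers: dict[str, str]) -> dict[str, str]:
    """Return only the W3C trace context headers, with lower-cased keys."""
    lowered = {k.lower(): v for k, v in headers.items()}
    return {name: lowered[name] for name in TRACE_HEADERS if name in lowered}


def start_span_attributes(headers: dict[str, str]) -> dict[str, str]:
    """Build the attribute dict the root span is created with."""
    attrs = {"__root__": "true"}  # root-span marking, now set in one place
    attrs.update(extract_trace_headers(headers))
    return attrs
```

The span itself stays provider-agnostic here: the attributes just carry the raw header values along until a provider that understands them picks them up.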

What this means in practice is that when a client (such as an OpenAI
client) is instrumented to create these traces, we'll continue that
distributed trace within Llama Stack as opposed to creating our own
root span that breaks the distributed trace between client and server.

It's slightly awkward to do this in Llama Stack because our Tracing
API knows nothing about OpenTelemetry, W3C trace headers, etc. - that
knowledge lives only in the specific provider implementation. So, the
trace headers are extracted in the server code but not actually used
until the provider implementation forms the proper context.
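On the provider side, continuing the caller's trace comes down to parsing the propagated `traceparent` value (format `version-trace_id-parent_id-flags`, per the W3C Trace Context spec) and using its ids as the parent of the new span. In practice the OpenTelemetry SDK's `TraceContextTextMapPropagator` handles this; the self-contained sketch below shows only the parsing, with hypothetical names:

```python
# Sketch: parse a propagated `traceparent` header so the tracing provider
# can continue the caller's trace instead of starting a new root span.
# Format per W3C Trace Context: version-trace_id-parent_id-flags.
# This is an illustration; the OpenTelemetry propagator does this for real.
from typing import NamedTuple, Optional


class TraceParent(NamedTuple):
    version: str    # 2 hex chars
    trace_id: str   # 32 hex chars, shared by every span in the trace
    parent_id: str  # 16 hex chars, the caller's span id
    flags: str      # 2 hex chars, e.g. "01" = sampled


def parse_traceparent(value: str) -> Optional[TraceParent]:
    """Return the parsed header, or None if it is malformed."""
    parts = value.strip().split("-")
    if len(parts) != 4:
        return None
    version, trace_id, parent_id, flags = parts
    if len(version) != 2 or len(trace_id) != 32 or len(parent_id) != 16 or len(flags) != 2:
        return None
    if trace_id == "0" * 32 or parent_id == "0" * 16:
        return None  # all-zero ids are invalid per the spec
    return TraceParent(version, trace_id, parent_id, flags)
```

A span created with this `trace_id` and `parent_id` joins the client's distributed trace; if parsing fails, the provider falls back to creating its own root span as before.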

This also centralizes how we were adding the `__root__` and
`__root_span__` attributes, as those two were being added in different
parts of the code instead of from a single place.

Fixes #2097

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-05-13 08:36:58 -04:00
inline feat: Propagate W3C trace context headers from clients 2025-05-13 08:36:58 -04:00
registry feat(providers): sambanova updated to use LiteLLM openai-compat (#1596) 2025-05-06 16:50:22 -07:00
remote fix: revert "feat(provider): adding llama4 support in together inference provider (#2123)" (#2124) 2025-05-08 15:18:16 -07:00
tests chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
utils feat: Propagate W3C trace context headers from clients 2025-05-13 08:36:58 -04:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00
datatypes.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00