fix(opentelemetry): Ensure Sentry spans don't leak when tracing is disabled #18337
Conversation
…sabled
Currently, when using Sentry alongside a custom OpenTelemetry setup, any spans started via our Sentry.startSpanX APIs leak into the OpenTelemetry setup even if tracing is disabled. This fix suppresses tracing for span creation via our startSpanX APIs, but ensures tracing is not suppressed within the callback, so that, for example, custom OTel spans created within Sentry.startSpanX calls are not suppressed. Closes: #17826
size-limit report 📦
node-overhead report 🧳
Note: This is a synthetic benchmark with a minimal express app and does not necessarily reflect the real-world performance impact in an application.
packages/opentelemetry/src/trace.ts (Outdated)

    const spanOptions = getSpanOptions(options);
    …
    if (!hasSpansEnabled()) {
h: This looks like almost the same code as above, even the comments. Can this be moved into a reusable method?
The only difference I've seen is that the one above has `() => span.end()` in the `handleCallbackErrors` call; not sure if this was intended.
I opted not to de-duplicate here. In general startSpan and startSpanManual are basically identical other than how they end spans (auto-end vs manual-end).
Splitting this out into a common helper makes it harder to understand because of how the callback and the success callback are handled differently. I think it's a complicated abstraction for little gain.
Tbh I would rather not deduplicate here; the code is already easy to get lost in without an extra abstraction.
What do you think?
I ended up updating this. I guess we'll hardly ever see that much divergence between the two APIs, so it probably doesn't hurt to keep this one DRY.
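For what it's worth, the shared shape can be pictured roughly like this (a sketch with made-up names, not the SDK's actual internals; it only handles synchronous callbacks and skips the real `handleCallbackErrors` error handling):

```ts
import { trace, type Span } from '@opentelemetry/api';

const tracer = trace.getTracer('sketch');

// Hypothetical shared helper: both entry points create the span the same way
// and only differ in whether the span is ended automatically.
function withSketchSpan<T>(name: string, autoEnd: boolean, callback: (span: Span) => T): T {
  const span = tracer.startSpan(name);
  try {
    return callback(span);
  } finally {
    if (autoEnd) {
      span.end();
    }
  }
}

// startSpan-like: the span is ended for the caller.
const startSpanSketch = <T>(name: string, cb: (span: Span) => T): T => withSketchSpan(name, true, cb);
// startSpanManual-like: the caller is responsible for ending the span.
const startSpanManualSketch = <T>(name: string, cb: (span: Span) => T): T => withSketchSpan(name, false, cb);
```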
    const spanOptions = getSpanOptions(options);
    …
    const span = tracer.startSpan(name, spanOptions, ctx);
    if (!hasSpansEnabled()) {
l: What about the following, so we keep a single return and don't have to touch two places if the return changes (or use a nested ternary):

    let context = ctx;
    if (!hasSpansEnabled()) {
      context = isTracingSuppressed(ctx) ? ctx : suppressTracing(ctx);
    }
    return tracer.startSpan(name, spanOptions, context);
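(For context: `isTracingSuppressed` and `suppressTracing` come from `@opentelemetry/core`, while `hasSpansEnabled` is Sentry's own check for whether span recording is enabled.)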
Updated.
…ans when using `only-if-parent: true`
logaretm left a comment
Nice one, I was going to raise the duplication thing as well, but your argument makes sense.
Drafting this for now, will investigate the failing log tests next week.
The integration tests were previously failing because, with suppressed spans, the trace ID lookup fell back to the scope. I updated the integration tests to explicitly enable tracing, and added an e2e test to showcase different trace IDs in a custom OTel setup with HTTP instrumentation, so we get different traces via the spans created by that instrumentation. We briefly discussed how scopes should handle trace IDs, but landed on not changing behavior for now.
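For reference, explicitly enabling tracing in the test apps boils down to something like this (a minimal sketch with a placeholder DSN, not the exact test setup):

```ts
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: 'https://public@dsn.ingest.sentry.io/1337', // placeholder DSN
  // Explicitly enable tracing so spans are actually recorded and the trace ID
  // comes from the active span instead of falling back to the scope.
  tracesSampleRate: 1.0,
});
```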
Currently, when using Sentry alongside a custom OpenTelemetry setup, any spans started via our Sentry.startSpanX APIs leak into that OpenTelemetry setup even if tracing is disabled.
This fix suppresses tracing for span creation via our startSpanX APIs, but ensures tracing is not suppressed within the callback, so that, for example, custom OTel spans created within Sentry.startSpanX calls are not suppressed.
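Conceptually, the behavior looks like this (a minimal sketch using plain OpenTelemetry primitives, not the actual SDK code; `startSketchSpan` and the `tracingDisabled` flag are stand-ins for the SDK's own entry points and "spans enabled" check):

```ts
import { context, trace, type Span } from '@opentelemetry/api';
import { suppressTracing } from '@opentelemetry/core';

const tracer = trace.getTracer('sketch');

function startSketchSpan<T>(name: string, tracingDisabled: boolean, callback: (span: Span) => T): T {
  // When tracing is disabled, create the span in a suppressed context so that
  // user-registered OTel SpanProcessors/exporters never see it.
  const spanCtx = tracingDisabled ? suppressTracing(context.active()) : context.active();
  const span = tracer.startSpan(name, {}, spanCtx);

  // Run the callback with the original, un-suppressed context (plus the span
  // set as active), so custom OTel spans created inside it are still recorded.
  const callbackCtx = trace.setSpan(context.active(), span);
  return context.with(callbackCtx, () => {
    try {
      return callback(span);
    } finally {
      span.end();
    }
  });
}
```

The important part is that suppression applies only to creating the Sentry span itself, not to the context the callback runs in.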
I updated the `node-otel-without-tracing` e2e tests to reflect that no Sentry spans leak into the OTLP endpoint, and also tried this out locally with an Express app and Jaeger.
Before the fix, Sentry spans leak:


After the fix, no Sentry span leakage:
(screenshots)
Closes: #17826