fix(sdk-trace-base): Export processed spans while exporter failed #4287
Conversation
```diff
@@ -221,7 +221,7 @@ export abstract class BatchSpanProcessorBase<T extends BufferConfig>
       const flush = () => {
         this._isExporting = true;
         this._flushOneBatch()
-          .then(() => {
+          .finally(() => {
             this._isExporting = false;
```
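For context, the callback that moved from `.then()` to `.finally()` is also where the processor rechecks for queued spans and restarts the timer. A paraphrased sketch of the surrounding logic (simplified, not line-for-line SDK source):

```ts
// Paraphrased sketch of the flush trigger in BatchSpanProcessorBase
// (simplified). With .then(), the recheck below ran only after a
// successful export; .finally() runs it after failed exports too.
const flush = () => {
  this._isExporting = true;
  this._flushOneBatch()
    .finally(() => {
      this._isExporting = false;
      if (this._finishedSpans.length > 0) {
        // Spans queued up while the batch was exporting: restart the
        // timer so they get picked up as well.
        this._clearTimer();
        this._maybeStartTimer();
      }
    })
    .catch(e => {
      this._isExporting = false;
      globalErrorHandler(e);
    });
};
```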
Both the `finally` and `catch` set `_isExporting`. Initially I removed the assignment from the `catch`, but that gave me slight nervous jitters.
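A quick standalone illustration of why keeping both assignments is safe (not SDK code): `.finally()` runs after both fulfillment and rejection, so the reset in `.catch()` is redundant on the failure path, but not harmful:

```ts
// Standalone demo: both handlers fire on rejection, so the flag is
// simply reset twice, mirroring the chain order in the diff above.
let isExporting = true;

Promise.reject(new Error('export failed'))
  .finally(() => {
    isExporting = false; // runs on success and on failure
  })
  .catch(err => {
    isExporting = false; // redundant after .finally(), but harmless
    console.log(err.message, '| isExporting:', isExporting); // false
  });
```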
Codecov Report

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##             main    #4287      +/-   ##
==========================================
+ Coverage   92.20%   92.21%   +0.01%
==========================================
  Files         336      336
  Lines        9515     9515
  Branches     2017     2017
==========================================
+ Hits         8773     8774       +1
+ Misses        742      741       -1
```
This PR is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 14 days.
Sorry, the PR must've gotten lost in my inbox.
Looks good, thanks for taking care of this.
Which problem is this PR solving?
While the exporter deals with a batch of spans, new spans may come in and wait to be exported. As previously implemented, a successful export would notice these waiting spans, triggering a renewed timer check, but not so for an unsuccessful export.

The result was that, prior to this commit, a failing export could end up in a situation where no further spans would ever be exported. This is due to the behaviour of `_addToBuffer` when the queue is full: imagine an export which fails after a long timeout (because of, for instance, network troubles). While the connection waits to be timed out, the span queue fills up. Once it is completely full, no new calls to recheck the timer are made. On its own, this behaviour is fine. Combined with the patched bug, it leads to a rather confusing case where the exporter never tries exporting again (see the sketch below).
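For illustration, a paraphrased sketch of the `_addToBuffer` behaviour described above (simplified, not line-for-line SDK source):

```ts
// Paraphrased sketch: when the queue is already full, the method returns
// early, so _maybeStartTimer() -- the only recheck trigger on this path --
// is never reached until the queue drains.
private _addToBuffer(span: ReadableSpan) {
  if (this._finishedSpans.length >= this._maxQueueSize) {
    return; // queue full: drop the incoming span, no timer recheck
  }
  this._finishedSpans.push(span);
  this._maybeStartTimer();
}
```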
Short description of the changes

This PR makes sure these spans are acknowledged and handled, even when exporting fails.
@pichlermarc you've mentioned working on refactoring the exporter; here's an interesting edge case.
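To see the fix in action, here is a hypothetical repro sketch (`FailingExporter` is made up for illustration; the `SpanExporter` interface and `BatchSpanProcessor` are from `@opentelemetry/sdk-trace-base` as of this PR):

```ts
import {
  BasicTracerProvider,
  BatchSpanProcessor,
  ReadableSpan,
  SpanExporter,
} from '@opentelemetry/sdk-trace-base';
import { ExportResult, ExportResultCode } from '@opentelemetry/core';

// Hypothetical exporter that simulates a slow network failure.
class FailingExporter implements SpanExporter {
  export(
    _spans: ReadableSpan[],
    resultCallback: (result: ExportResult) => void
  ): void {
    // Fail after a long delay, like a connection timeout would.
    setTimeout(() => resultCallback({ code: ExportResultCode.FAILED }), 5_000);
  }
  shutdown(): Promise<void> {
    return Promise.resolve();
  }
}

const provider = new BasicTracerProvider();
provider.addSpanProcessor(new BatchSpanProcessor(new FailingExporter()));

const tracer = provider.getTracer('repro');
// Generate enough spans during the slow export to fill the queue
// (default maxQueueSize is 2048).
for (let i = 0; i < 5000; i++) {
  tracer.startSpan(`span-${i}`).end();
}
// Before this fix: once the queue filled during the failing export, the
// timer was never rechecked and no further export attempts were made.
// With .finally(), the processor rechecks the queue after failures too.
```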
Type of change
Please delete options that are not relevant.
How Has This Been Tested?
Checklist: