Fixed a possible deadlock issue in DefaultAggregationPeriodCycle #1025
Conversation
The existing method took a thread-pool thread and blocked it in a while (true) loop.
Important: this is untested under real-world conditions; it is only a possible fix for the issue.
Based on our observations, this is related to #850.
      while (true)
      {
          DateTimeOffset now = DateTimeOffset.Now;
          TimeSpan waitPeriod = GetNextCycleTargetTime(now) - now;

    -     Task.Delay(waitPeriod).ConfigureAwait(continueOnCapturedContext: false).GetAwaiter().GetResult();
    +     await Task.Delay(waitPeriod).ConfigureAwait(continueOnCapturedContext: false);
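For context on the behavioral change: the removed line parks a thread-pool thread for the entire wait period, while the awaited form returns the thread to the pool during the delay and resumes on whichever pool thread is free. A minimal sketch of the loop as it would read after this change (the method name RunAsync is hypothetical; GetNextCycleTargetTime is taken from the diff above):

    private async Task RunAsync()
    {
        while (true)
        {
            DateTimeOffset now = DateTimeOffset.Now;
            TimeSpan waitPeriod = GetNextCycleTargetTime(now) - now;

            // The pool thread is released for the duration of the delay;
            // the continuation runs on whichever pool thread is available.
            await Task.Delay(waitPeriod).ConfigureAwait(continueOnCapturedContext: false);

            // ... run one aggregation cycle ...
        }
    }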
Correct me if I am wrong here: the original intention of this blocking was that the TP thread never goes back to the pool to pick up other tasks. This ensures that MetricAggregator always has a TP thread available, so aggregation keeps working even when the system is otherwise running low on TP threads.
This proposed change would alter that intent.
My alternate proposal is for MetricAggregator to create a new Thread() at startup and keep that thread to itself, sleeping and waking on it. This would guarantee the thread is always available for metric aggregation, even if the TP is too busy, and it avoids conflicts with anything else, since this thread would only run metric-aggregator code. See the sketch after this comment.
Related: #413 (comment)
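A minimal sketch of the dedicated-thread proposal above, assuming the same GetNextCycleTargetTime contract as in the diff (class and member names are hypothetical):

    using System;
    using System.Threading;

    internal sealed class AggregationLoop
    {
        private Thread _aggregationThread;

        public void Start()
        {
            _aggregationThread = new Thread(Run)
            {
                Name = "MetricAggregation",
                IsBackground = true   // never keeps the process alive
            };
            _aggregationThread.Start();
        }

        private void Run()
        {
            while (true)
            {
                DateTimeOffset now = DateTimeOffset.Now;
                TimeSpan waitPeriod = GetNextCycleTargetTime(now) - now;

                // Sleep on a thread the pool has never seen: thread-pool
                // starvation cannot delay the wake-up, and no custom task
                // scheduler can commandeer this thread.
                if (waitPeriod > TimeSpan.Zero)
                {
                    Thread.Sleep(waitPeriod);
                }

                // ... fetch and send the aggregated metrics ...
            }
        }

        // Placeholder standing in for the real method in
        // DefaultAggregationPeriodCycle.
        private DateTimeOffset GetNextCycleTargetTime(DateTimeOffset now)
            => now.AddMinutes(1);
    }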
I agree the current proposal will likely resolve the conflict with other threads, but I'd prefer the new Thread() approach to ensure that telemetry gets sent even when the TP is busy. (We don't want telemetry to be lost at exactly the time when it is most critical.)
Please share your thoughts.
@macrogreg Please have a look here.
My solution uses the default TP, which may lead to unwritten telemetry on highly loaded systems whenever the TP injects new threads too slowly.
Don't get me wrong, but I think using a dedicated thread here is just a workaround. I'm currently thinking about a dedicated scheduler with one worker thread to solve this problem efficiently; that would address all the starvation issues as well. See the sketch below.
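For illustration, a dedicated scheduler with a single worker thread could look roughly like this. This is a sketch of the idea, not code from this repository (the class name is hypothetical):

    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Threading;
    using System.Threading.Tasks;

    internal sealed class SingleThreadTaskScheduler : TaskScheduler
    {
        private readonly BlockingCollection<Task> _tasks = new BlockingCollection<Task>();

        public SingleThreadTaskScheduler()
        {
            var worker = new Thread(() =>
            {
                // Drain the queue forever, executing each task on this
                // dedicated thread, never on the thread pool.
                foreach (Task task in _tasks.GetConsumingEnumerable())
                {
                    TryExecuteTask(task);
                }
            })
            {
                Name = "MetricAggregation",
                IsBackground = true
            };
            worker.Start();
        }

        public override int MaximumConcurrencyLevel => 1;

        protected override void QueueTask(Task task) => _tasks.Add(task);

        // Refuse inlining: running the task on the caller's thread would
        // defeat the purpose of the dedicated worker.
        protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued) => false;

        protected override IEnumerable<Task> GetScheduledTasks() => _tasks.ToArray();
    }

Aggregation work would then be queued via Task.Factory.StartNew(action, CancellationToken.None, TaskCreationOptions.None, scheduler), so it always runs on the scheduler's own thread regardless of thread-pool pressure.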
Hi folks,
Unfortunately, we cannot take this PR as is. We purposefully dedicate a thread to aggregation to avoid thread-pool starvation issues, and this change would break that.
@dennisbappert: A dedicated scheduler would need to create its own thread anyway, so setting one up would be overkill here.
We suspect that application code somehow got executed before the first aggregation ran, and thus before the aggregation thread was dedicated. If a synchronization context tries to complete a task on a specific thread, and that happens to be the thread used for aggregation, we could observe the described effect.
I have no way of knowing for sure, but I suspect the issue is caused by the Nito.AsyncEx library. It offers non-standard task schedulers, which are usually very tricky. I suggest we make the aggregation thread a dedicated thread that the thread pool never sees, and see whether that helps. A minimal illustration of this deadlock class follows below.
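To make the suspected failure mode concrete, here is a hypothetical, minimal repro of that general deadlock class, not the SDK's actual code: a single-threaded synchronization context (Nito.AsyncEx's AsyncContext) whose only thread blocks on a task that needs that same thread in order to complete:

    using System.Threading.Tasks;
    using Nito.AsyncEx;

    internal static class DeadlockRepro
    {
        private static async Task WorkAsync()
        {
            // No ConfigureAwait(false): the continuation is posted back
            // to the captured single-threaded context.
            await Task.Delay(100);
        }

        private static void Main()
        {
            // AsyncContext runs everything on one thread that carries its
            // own SynchronizationContext.
            AsyncContext.Run(() =>
            {
                // This blocks the context's only thread. WorkAsync's
                // continuation is queued to that same thread, so it can
                // never run and GetResult() never returns: deadlock.
                WorkAsync().GetAwaiter().GetResult();
            });
        }
    }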
@dennisbappert
@cijothomas
Please see #1028