Fixed a possible deadlock issue in DefaultAggregationPeriodCycle #1025

Closed
dbpprt wants to merge 1 commit

Conversation


@dbpprt dbpprt commented Dec 6, 2018

The existing method took a thread-pool thread and blocked it in a while (true) loop.

For significant contributions, please make sure you have completed the following items:

  • Design discussion issue #
  • Changes in public surface reviewed
  • CHANGELOG.md updated with a one-line description of the fix and a link to the original issue.
  • The PR will trigger build, unit test, and functional tests automatically. If your PR was submitted from a fork, mention one of the committers to initiate the build for you.

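A minimal sketch of what the proposed change amounts to (not the exact PR diff; GetNextCycleTargetTime exists in DefaultAggregationPeriodCycle, while the class and RunSingleCycle below are hypothetical stand-ins): the blocking wait becomes an asynchronous one, so the thread-pool thread is returned to the pool between aggregation cycles.

using System;
using System.Threading.Tasks;

// Sketch only - an async variant of the aggregation loop.
internal class AggregationLoopSketch
{
    public async Task RunAsync()
    {
        while (true)
        {
            DateTimeOffset now = DateTimeOffset.Now;
            TimeSpan waitPeriod = GetNextCycleTargetTime(now) - now;

            // Asynchronous wait: the thread is released here instead of being
            // blocked by GetAwaiter().GetResult().
            await Task.Delay(waitPeriod).ConfigureAwait(continueOnCapturedContext: false);

            RunSingleCycle();
        }
    }

    // Hypothetical stand-ins for members of the real class.
    private DateTimeOffset GetNextCycleTargetTime(DateTimeOffset now) => now.AddMinutes(1);
    private void RunSingleCycle() { /* aggregate and track metrics */ }
}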
@msftclas msftclas commented Dec 6, 2018

CLA assistant check
All CLA requirements met.

@dbpprt dbpprt (Author) commented Dec 6, 2018

Important: this is "untested" under real circumstances; it is just a possible fix for the issue.

@dbpprt dbpprt (Author) commented Dec 6, 2018

From our observations, this is related to #850.

The review thread below is on the changed wait in DefaultAggregationPeriodCycle:

  while (true)
  {
      DateTimeOffset now = DateTimeOffset.Now;
      TimeSpan waitPeriod = GetNextCycleTargetTime(now) - now;

-     Task.Delay(waitPeriod).ConfigureAwait(continueOnCapturedContext: false).GetAwaiter().GetResult();
+     await Task.Delay(waitPeriod).ConfigureAwait(continueOnCapturedContext: false);
Contributor commented:

Correct me if I am wrong here: the original intention of this blocking was that this TP thread never goes back to the pool to do other tasks. This is done to ensure that MetricAggregator always has this TP thread available, so that aggregation works even when the system is otherwise running low on TP threads. This proposed change would alter that intent.

My alternate proposal was for MetricAggregator to create a new Thread() at startup and keep that thread to itself, sleeping and waking up on it. This should ensure that the thread is always available for metric aggregation, even if the TP is too busy. It also means no conflicts with anything else, as this Thread will only run metric aggregator code.
Related: #413 (comment)
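A rough sketch of that alternate proposal (GetNextCycleTargetTime exists in the SDK; the class, thread name, and RunSingleCycle are hypothetical stand-ins): a Thread created at startup that only ever runs the aggregation loop, so thread-pool pressure cannot affect it.

using System;
using System.Threading;

// Sketch of the "dedicated new Thread()" proposal.
internal class DedicatedAggregationThreadSketch
{
    public void Start()
    {
        var thread = new Thread(AggregationLoop)
        {
            IsBackground = true,                // do not keep the process alive
            Name = "MetricAggregationThread",   // hypothetical name
        };
        thread.Start();
    }

    private void AggregationLoop()
    {
        while (true)
        {
            DateTimeOffset now = DateTimeOffset.Now;
            TimeSpan waitPeriod = GetNextCycleTargetTime(now) - now;

            // Sleeping blocks only this dedicated thread; the thread pool is untouched.
            Thread.Sleep(waitPeriod);

            RunSingleCycle();
        }
    }

    // Hypothetical stand-ins for members of the real aggregator.
    private DateTimeOffset GetNextCycleTargetTime(DateTimeOffset now) => now.AddMinutes(1);
    private void RunSingleCycle() { /* aggregate and track metrics */ }
}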

Contributor commented:

Agree that the current proposal will likely solve the conflict with other threads, but I'd prefer the new Thread() approach to ensure that telemetry gets sent even when the TP is busy. (We don't want telemetry to be lost at the time when it's most critical.)
Please share your thoughts.

@macrogreg Please have a look here.

@dbpprt (Author) commented:

My solution uses the default TP, which may lead to unwritten telemetry on high-load systems whenever the TP injects new threads too slowly.

Don't get me wrong, but I think using a dedicated thread here is just a workaround. I'm currently thinking about a dedicated scheduler with one worker thread to solve this problem efficiently. That would solve all starvation issues as well.
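A minimal sketch (not part of the SDK; class and thread names are hypothetical) of what a "dedicated scheduler with one worker thread" could look like: a TaskScheduler that executes every queued task on a single dedicated, non-thread-pool thread.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Runs all queued tasks on one dedicated thread, never on the thread pool.
public sealed class SingleThreadTaskScheduler : TaskScheduler, IDisposable
{
    private readonly BlockingCollection<Task> _tasks = new BlockingCollection<Task>();
    private readonly Thread _thread;

    public SingleThreadTaskScheduler()
    {
        _thread = new Thread(() =>
        {
            // Consume and execute tasks until the scheduler is disposed.
            foreach (Task task in _tasks.GetConsumingEnumerable())
            {
                TryExecuteTask(task);
            }
        })
        {
            IsBackground = true,
            Name = "MetricAggregationScheduler",   // hypothetical name
        };
        _thread.Start();
    }

    public override int MaximumConcurrencyLevel => 1;

    protected override void QueueTask(Task task) => _tasks.Add(task);

    // Never inline onto calling threads: everything must run on the dedicated thread.
    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued) => false;

    protected override IEnumerable<Task> GetScheduledTasks() => _tasks.ToArray();

    public void Dispose() => _tasks.CompleteAdding();
}

Aggregation work could then be started via new TaskFactory(new SingleThreadTaskScheduler()).StartNew(...). Whether this buys much over a plain dedicated Thread is essentially the question raised in the next comment.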

Contributor commented:

Hi folks,
Unfortunately, we cannot take this PR as is. We purposefully dedicate a thread to aggregation to avoid thread-pool starvation issues. This change would break that.

@dennisbappert : A dedicated scheduler would need to create its own thread anyway, so setting one up would be overkill here.

We suspect that application code somehow got executed before the first aggregation ran, and thus before the aggregation thread was dedicated. If a synchronization context is trying to complete a task on a specific thread, and that happens to be the thread used for aggregation, we could observe the described effect.

I have no way of knowing for sure, but I suspect the issue is caused by the Nito.AsyncEx library. It offers non-standard task schedulers, which is usually very tricky. I suggest we make the aggregation thread a dedicated thread never seen by the thread pool - we can see if this helps...
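To make the suspected mechanism concrete, a generic illustration (not a reproduction of this specific issue; it assumes the Nito.AsyncEx package is referenced): blocking a thread synchronously on a task whose continuation can only run on that same thread never completes, which is the class of deadlock being described.

using System;
using System.Threading.Tasks;
using Nito.AsyncEx;   // AsyncContext installs a single-threaded SynchronizationContext

internal static class SyncContextDeadlockSketch
{
    // No ConfigureAwait(false): the continuation after the await is posted back
    // to the captured SynchronizationContext.
    private static async Task DoWorkAsync()
    {
        await Task.Delay(100);
    }

    private static void Main()
    {
        // Continuations posted to this context can only run on this one thread.
        // Blocking that thread synchronously on a task whose completion needs the
        // same thread never returns - analogous to a non-standard scheduler or
        // context needing a thread that the aggregation loop had blocked for good.
        AsyncContext.Run(() =>
        {
            DoWorkAsync().GetAwaiter().GetResult();   // deadlocks here
        });
    }
}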

@macrogreg macrogreg closed this Dec 8, 2018