Fix timer handle churn in websocket heartbeat #8608
Conversation
Each message would cancel and reset the timer handle, which meant asyncio had to rebuild the whole schedule heap frequently because so many timer handles were being created and then cancelled. When cancelled asyncio timer handles reach the threshold, the whole heap has to be rebuilt (https://github.com/python/cpython/blob/1422500d020bd199b26357fc387f8b79b82226cd/Lib/asyncio/base_events.py#L1968), which is extremely inefficient. To solve this, when a timer handle is already running, instead of cancelling it we let it fire and reschedule if the heartbeat would be sent too early.
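As an illustration of the approach (a minimal sketch with made-up names, not aiohttp's actual internals), the idea is that receiving a message only pushes a deadline forward; the single pending TimerHandle is never cancelled, and when it fires it reschedules itself if sending now would be too early:

```python
import asyncio
from typing import Callable, Optional


class HeartbeatScheduler:
    """Sketch of the deferred-reschedule idea: at most one TimerHandle
    exists at a time and it is never cancelled."""

    def __init__(
        self,
        loop: asyncio.AbstractEventLoop,
        interval: float,
        send_ping: Callable[[], None],
    ) -> None:
        self._loop = loop
        self._interval = interval
        self._send_ping = send_ping
        self._when = 0.0  # earliest time the next heartbeat may be sent
        self._cb: Optional[asyncio.TimerHandle] = None

    def message_received(self) -> None:
        # Called for every inbound frame: just push the deadline forward.
        self._when = self._loop.time() + self._interval
        if self._cb is None:
            # Nothing pending yet, so schedule one handle; otherwise leave
            # the existing handle alone instead of cancelling/recreating it.
            self._cb = self._loop.call_at(self._when, self._fire)

    def _fire(self) -> None:
        if self._loop.time() < self._when:
            # A message arrived after this handle was scheduled: sending now
            # would be too early, so reschedule for the remaining interval.
            self._cb = self._loop.call_at(self._when, self._fire)
            return
        self._cb = None
        self._send_ping()
```

Compared with cancelling and recreating a handle per message, this creates roughly one TimerHandle per heartbeat interval instead of one per frame, so the cancellations that trigger asyncio's heap rebuilds are avoided.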
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files
@@ Coverage Diff @@
## master #8608 +/- ##
==========================================
+ Coverage 97.90% 97.93% +0.03%
==========================================
Files 107 107
Lines 33515 33669 +154
Branches 3941 3952 +11
==========================================
+ Hits 32813 32974 +161
+ Misses 518 515 -3
+ Partials 184 180 -4
We don't have any tests for when the heartbeat would get rescheduled. Need to write some.
95.2% reduction in websocket client timer handle churn. That's even with the downstream libs that already have their own ping/pong implementation. I'm happy with this. I'll let it bake overnight and then write some tests tomorrow if I don't have any more fires from my day job to put out.
95.1% reduction on the other instance I tested this on.
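For context on how a reduction like this could be measured (a hypothetical helper, not how the numbers above were actually gathered), one option is to wrap the loop's scheduling methods and count how many timer handles a workload creates before and after the patch:

```python
import asyncio
import functools


def instrument_timer_creation(loop: asyncio.AbstractEventLoop) -> dict:
    """Count TimerHandle creation by wrapping call_at()/call_later().

    Works with the default pure-Python event loop, where an instance
    attribute can shadow the bound method.
    """
    counts = {"call_at": 0, "call_later": 0}

    for name in ("call_at", "call_later"):
        original = getattr(loop, name)

        @functools.wraps(original)
        def wrapper(*args, _name=name, _original=original, **kwargs):
            counts[_name] += 1
            return _original(*args, **kwargs)

        # Shadow the method on this loop instance only.
        setattr(loop, name, wrapper)

    return counts
```

Running the same websocket client workload against the old and new heartbeat code and comparing the counters gives a rough churn figure.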
Looks good to me. I'm about to update these test files with type checking though, so would be great if you can update after those changes.
Co-authored-by: Sam Bull <git@sambull.org>
Thanks!
Backport to 3.10: 💔 cherry-picking failed — conflicts found
❌ Failed to cleanly apply c4acabc on top of patchback/backports/3.10/c4acabc836ab969e95199aa976e85c01df720a27/pr-8608
Backporting merged PR #8608 into master
🤖 @patchback
Backport to 3.11: 💔 cherry-picking failed — conflicts found
❌ Failed to cleanly apply c4acabc on top of patchback/backports/3.11/c4acabc836ab969e95199aa976e85c01df720a27/pr-8608
Backporting merged PR #8608 into master
🤖 @patchback
Co-authored-by: Sam Bull <git@sambull.org> (cherry picked from commit c4acabc)
What do these changes do?
Each message would cancel and reset the timer handle, which meant asyncio had to rebuild the whole schedule heap frequently because so many timer handles were being created and then cancelled.
When cancelled asyncio timer handles reach the threshold, the whole heap has to be rebuilt
(https://github.com/python/cpython/blob/1422500d020bd199b26357fc387f8b79b82226cd/Lib/asyncio/base_events.py#L1968), which is extremely inefficient (see the simplified model below).
To solve this, when a timer handle is already running, instead of cancelling it we let it fire and reschedule if the heartbeat would be sent too early.
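For readers unfamiliar with that asyncio behaviour, here is a simplified, self-contained model (illustrative threshold and structures, not CPython's actual code) of why piling up cancelled timer handles forces an O(n) filter-and-heapify pass:

```python
import heapq

# Toy model of asyncio's scheduled-timer bookkeeping: cancelling a handle only
# marks it, and once cancelled entries make up too large a fraction of the
# heap, the loop filters them out and re-heapifies everything.  The threshold
# below is illustrative, not CPython's real constant.
CANCELLED_FRACTION_THRESHOLD = 0.5


class ToyTimerHeap:
    def __init__(self) -> None:
        self._scheduled: list[tuple[float, int, dict]] = []
        self._cancelled_count = 0

    def call_at(self, when: float) -> dict:
        handle = {"when": when, "cancelled": False}
        # id() is only a tie-breaker so equal deadlines never compare dicts.
        heapq.heappush(self._scheduled, (when, id(handle), handle))
        return handle

    def cancel(self, handle: dict) -> None:
        # Cancelling is cheap on its own: the entry stays in the heap.
        handle["cancelled"] = True
        self._cancelled_count += 1

    def maybe_rebuild(self) -> None:
        # The expensive part: once enough entries are dead weight, every
        # surviving handle is copied and the heap invariant is rebuilt.
        if (
            self._scheduled
            and self._cancelled_count / len(self._scheduled)
            > CANCELLED_FRACTION_THRESHOLD
        ):
            self._scheduled = [
                entry for entry in self._scheduled if not entry[2]["cancelled"]
            ]
            heapq.heapify(self._scheduled)
            self._cancelled_count = 0
```

Cancelling a handle for every websocket message keeps pushing the heap toward that rebuild, which is the churn this change eliminates.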
The websocket ping/pong code was missing some coverage in this area so there are some new tests as well.
Are there changes in behavior for the user?
no
Is it a substantial burden for the maintainers to support this?
no