
dkg: strengthen logic for steps update #2797

Merged: 6 commits merged into main from pineibt/dkg-steps-logic-fixed on Jan 19, 2024

Conversation

@pinebit (Contributor) commented Jan 17, 2024

Improved logic for steps update in DKG process.

category: misc
ticket: none

codecov bot commented Jan 17, 2024

Codecov Report

Attention: 3 lines in your changes are missing coverage. Please review.

Comparison: base (d2748d5) 53.27% vs. head (14ade9d) 53.22%.
Report is 1 commit behind head on main.

Files | Patch % | Lines
dkg/sync/server.go | 88.88% | 2 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2797      +/-   ##
==========================================
- Coverage   53.27%   53.22%   -0.06%     
==========================================
  Files         199      199              
  Lines       27616    27629      +13     
==========================================
- Hits        14713    14706       -7     
- Misses      11081    11097      +16     
- Partials     1822     1826       +4     

☔ View full report in Codecov by Sentry.

-	// Sleep for 100ms to let clients connect with each other.
-	time.Sleep(time.Millisecond * 100)
+	// Sleep for 250ms to let clients connect with each other.
+	// Must be at least two times greater than the sync messages period specified in client.go NewClient().
Contributor:

can you please point me to the code where this is specified?

Contributor Author (@pinebit):

period: 250 * time.Millisecond,
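
For context, a minimal sketch (in Go) of the timing relationship being discussed, assuming the client emits sync messages on a fixed 250ms ticker; the syncMsgPeriod constant and the simulated goroutine are illustrative, not the real dkg/sync code:

package main

import (
	"fmt"
	"time"
)

// syncMsgPeriod mirrors the 250ms period set when constructing the sync client.
const syncMsgPeriod = 250 * time.Millisecond

func main() {
	done := make(chan struct{})

	// Simulate a client emitting sync messages on a fixed ticker.
	go func() {
		ticker := time.NewTicker(syncMsgPeriod)
		defer ticker.Stop()
		for {
			select {
			case <-done:
				return
			case t := <-ticker.C:
				fmt.Println("sync message at", t.Format(time.StampMilli))
			}
		}
	}()

	// The test-side sleep must be at least twice the sync period so each
	// peer has sent (and had a chance to process) at least one message.
	time.Sleep(2 * syncMsgPeriod)
	close(done)
}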

Review thread on dkg/sync/client.go (outdated, resolved)
@@ -53,7 +53,7 @@ func NewClient(tcpNode host.Host, peer peer.ID, hashSig []byte, version version.
 // supports reestablishing on relay circuit recycling, and supports soft shutdown.
 type Client struct {
 	// Mutable state
-	mu sync.Mutex
+	mu sync.RWMutex
Contributor:

why do we need a read-write mutex here?

@pinebit (Contributor Author) commented Jan 18, 2024:

Because in many places we only need read access. Using RWMutex improves performance (fewer lock waits) and is widely adopted in Go applications. However, I found that Charon often uses a plain Mutex, and it is unclear why. I would like us to eventually move to RWMutex wherever appropriate.

Contributor:

@pinebit Absolutely agree that RWMutexes are more optimized, but I am wondering whether we necessarily need one here. In charon, we have tried to refrain from "premature optimization" wherever possible, hence you see more plain mutexes than rw-mutexes.

By what magnitude do you believe using RWMutex would improve performance here?

Otherwise, we can discuss this async and create a ticket to refactor the mutexes in the codebase to RWMutex in a separate PR.

@pinebit (Contributor Author) commented Jan 18, 2024:

This is more about training muscle memory and developing the habit of using the appropriate API; it is not about optimizing performance in this particular case. It also improves readability, because it expresses intent: if I see RLock(), I can quickly tell that the function should not modify state, and I will think twice before introducing a change that does.

I am not suggesting we rewrite the entire codebase to use RWMutex right away; we can do it whenever we touch an affected file, for example when adding or changing a getter or setter.

@xenowits (Contributor) commented Jan 19, 2024:

> if I see RLock(), I can quickly tell that the function should not modify state, and I will think twice before introducing a change that does.

Yes, precisely, this removes cognitive load!
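
To make the point about intent concrete, here is a minimal sketch (in Go) of the getter/setter locking pattern being discussed; the client struct and its connected field are illustrative placeholders, not the real dkg/sync Client type:

package main

import (
	"fmt"
	"sync"
)

// client is an illustrative stand-in for a struct guarding mutable state.
type client struct {
	mu        sync.RWMutex
	connected bool
}

// IsConnected only reads state, so it takes the read lock. Seeing RLock
// here signals to the reader that the method must not mutate the struct.
func (c *client) IsConnected() bool {
	c.mu.RLock()
	defer c.mu.RUnlock()

	return c.connected
}

// setConnected mutates state, so it takes the exclusive write lock.
func (c *client) setConnected(v bool) {
	c.mu.Lock()
	defer c.mu.Unlock()

	c.connected = v
}

func main() {
	var c client
	c.setConnected(true)
	fmt.Println("connected:", c.IsConnected())
}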

pinebit and others added 3 commits January 18, 2024
Co-authored-by: Abhishek Kumar <43061995+xenowits@users.noreply.github.com>
Co-authored-by: Abhishek Kumar <43061995+xenowits@users.noreply.github.com>
Co-authored-by: Abhishek Kumar <43061995+xenowits@users.noreply.github.com>

Review thread on dkg/sync/server.go (outdated, resolved)
sonarcloud bot commented Jan 18, 2024

Quality Gate passed

Kudos, no new issues were introduced!

0 New issues
0 Security Hotspots
No data about Coverage
0.0% Duplication on New Code

See analysis details on SonarCloud

@pinebit added the "merge when ready" label (indicates bulldozer bot may merge when all checks pass) on Jan 19, 2024
obol-bulldozer bot merged commit 226bdfc into main on Jan 19, 2024
14 checks passed
obol-bulldozer bot deleted the pineibt/dkg-steps-logic-fixed branch on January 19, 2024 at 08:14
gsora pushed a commit that referenced this pull request Jan 24, 2024
Improved logic for steps update in DKG process.

category: misc
ticket: none
Labels: merge when ready (indicates bulldozer bot may merge when all checks pass), qs-audit, v0.19.1
3 participants