
Making sure that the resolved DNS lists get used in full #3071

Merged
merged 4 commits into from
Nov 23, 2021
Conversation

@gezero (Contributor) commented Nov 16, 2021

Signed-off-by: Jiri Peinlich jiri.peinlich@gmail.com

PR description

Previously we would always process the list of DNS-resolved peers from the beginning, trying at most maxConnections valid elements from the list and never more. So each time we might try the same 25 peers, connect to them, and then drop them. This PR first shuffles the peer list each time before applying it, and then takes elements from the list for as long as free peer slots are available, instead of guessing the limit up front and hoping that all the peers connect.
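The selection logic described above can be sketched roughly as follows (a minimal illustration, not the actual Besu code; `PeerSelectionSketch`, `selectPeers`, `MAX_CONNECTIONS`, and the string peer IDs are hypothetical names for this sketch):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;

public class PeerSelectionSketch {
    static final int MAX_CONNECTIONS = 25;

    // Shuffle the resolved peers, then keep taking candidates only while
    // free slots remain, instead of always trying the same fixed prefix.
    static List<String> selectPeers(List<String> dnsPeers, int currentConnections) {
        List<String> shuffled = new ArrayList<>(dnsPeers);
        Collections.shuffle(shuffled); // avoid retrying the same 25 peers each round
        AtomicInteger free = new AtomicInteger(MAX_CONNECTIONS - currentConnections);
        return shuffled.stream()
                .takeWhile(p -> free.getAndDecrement() > 0)
                .collect(Collectors.toList());
    }
}
```

The takeWhile predicate stops consuming the (shuffled) list exactly when the free slots run out, so the whole list is eventually usable across rounds rather than only its prefix.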

Changelog

@@ -376,6 +377,7 @@ void attemptPeerConnections() {
   public Stream<DiscoveryPeer> streamDiscoveredPeers() {
     List<DiscoveryPeer> peers = dnsPeers.get();
     if (peers != null) {
+      Collections.shuffle(peers);
       return Stream.concat(peerDiscoveryAgent.streamDiscoveredPeers(), peers.stream());
     }
     return peerDiscoveryAgent.streamDiscoveredPeers();
A reviewer (Contributor) commented on this change:

Maybe we can also shuffle this part?

@gezero (Contributor, author) replied:
Hmm, unfortunately streams don't have a convenient method to shuffle the elements.

@gezero (Contributor, author) added:
Shuffling a stream goes against the "lazy" nature of a stream I would say. To shuffle it you would need to process all elements at least once. So you need to collect, shuffle, and convert result to another stream.
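As the author notes, a stream can only be shuffled by materializing it first. A generic helper illustrating that collect-shuffle-restream step might look like this (a sketch only; `StreamShuffle` and `shuffled` are hypothetical names, not Besu or JDK API):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StreamShuffle {
    // Collect, shuffle, re-stream: the whole stream must be buffered in
    // memory first, which defeats the lazy evaluation a Stream provides.
    static <T> Stream<T> shuffled(Stream<T> in) {
        List<T> buffer = in.collect(Collectors.toCollection(ArrayList::new));
        Collections.shuffle(buffer);
        return buffer.stream();
    }
}
```

This is why the PR shuffles the backing List before streaming it, rather than trying to shuffle the Stream itself.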

@gezero (Contributor, author) commented Nov 16, 2021

I am going to add a test covering the takeWhile part

@gezero gezero closed this Nov 16, 2021
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
Previously we would always process the list from the beginning. We would
only try maximumpeers elements from the list and never try more. This PR
firstly shuffles the peers list each time before applying it and then
also takes from the list for as long as there are free peers slots
available.

Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
@gezero gezero reopened this Nov 18, 2021
@gezero (Contributor, author) commented Nov 18, 2021

I added a test to check the takeWhile
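A test of that behavior might assert, roughly, that candidate selection stops once the free slots run out (a hypothetical shape, not the actual test added in the PR; `attempt` and the peer names are made up for illustration):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;

public class TakeWhileCheck {
    // Returns the peers that would actually be attempted given the
    // number of free connection slots.
    static List<String> attempt(List<String> candidates, int freeSlots) {
        AtomicInteger remaining = new AtomicInteger(freeSlots);
        return candidates.stream()
                .takeWhile(p -> remaining.getAndDecrement() > 0)
                .collect(Collectors.toList());
    }
}
```

The key assertions are that exactly freeSlots peers are taken when more candidates are available, and none when no slots are free.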

@matkt (Contributor) left a review:
LGTM

@sonarqubecloud commented:
SonarCloud Quality Gate failed.

- 0 Bugs (A)
- 0 Vulnerabilities (A)
- 0 Security Hotspots (A)
- 0 Code Smells (A)
- 75.0% Coverage
- 0.0% Duplication

@macfarla macfarla merged commit b373379 into hyperledger:main Nov 23, 2021
eum602 pushed a commit to lacchain/besu that referenced this pull request Nov 3, 2023
…#3071)

* Checking that we try hard enough to get peers from dns-discovery

Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>

* Making sure that the resolved DNS lists get used in full

Previously we would always process the list from the beginning. We would
only try maximumpeers elements from the list and never try more. This PR
firstly shuffles the peers list each time before applying it and then
also takes from the list for as long as there are free peers slots
available.

Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>

Co-authored-by: Sally MacFarlane <sally.macfarlane@consensys.net>
4 participants