# Start creating connections (edges) with large nonces (#7966)

Changes from 15 commits.
```diff
@@ -193,11 +193,22 @@ impl NetworkState {
         self.runtime.handle.spawn(fut.in_current_span())
     }

-    pub fn propose_edge(&self, peer1: &PeerId, with_nonce: Option<u64>) -> PartialEdgeInfo {
+    pub fn propose_edge(
+        &self,
+        clock: &time::Clock,
+        peer1: &PeerId,
+        with_nonce: Option<u64>,
+    ) -> PartialEdgeInfo {
         // When we create a new edge we increase the latest nonce by 2 in case we miss a removal
         // proposal from our partner.
         let nonce = with_nonce.unwrap_or_else(|| {
-            self.routing_table_view.get_local_edge(peer1).map_or(1, |edge| edge.next())
+            let nonce = Edge::create_fresh_nonce(clock);
+            // If we already had a connection to this peer - check that edge's nonce.
+            // And use either that one or the one from the current timestamp.
+            // We would use existing edge's nonce, if we were trying to connect to a given peer multiple times per second.
+            self.routing_table_view
+                .get_local_edge(peer1)
+                .map_or(nonce, |edge| std::cmp::max(edge.next(), nonce))
         });
         PartialEdgeInfo::new(&self.config.node_id(), peer1, nonce, &self.config.node_key)
     }
```
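The nonce-selection rule in this hunk can be sketched as a standalone function (a sketch only: `fresh_nonce`, `propose_nonce`, the odd-rounding of the timestamp, and modeling `edge.next()` as `nonce + 2` are all illustrative assumptions, not the nearcore API):

```rust
// Sketch of the nonce-selection rule from the hunk above: derive a fresh
// nonce from the wall clock, and if a local edge to the peer already exists,
// take the larger of the fresh nonce and that edge's successor nonce, so
// rapid reconnects within the same second still move the nonce forward.
fn fresh_nonce(unix_ts: u64) -> u64 {
    // Assumption: "edge added" nonces are odd, so round the timestamp up
    // to the next odd number.
    if unix_ts % 2 == 1 { unix_ts } else { unix_ts + 1 }
}

fn propose_nonce(unix_ts: u64, existing_edge_nonce: Option<u64>) -> u64 {
    let fresh = fresh_nonce(unix_ts);
    // `edge.next()` in the diff, modeled here as nonce + 2 per the comment
    // "increase the latest nonce by 2 in case we miss a removal proposal".
    existing_edge_nonce.map_or(fresh, |n| std::cmp::max(n + 2, fresh))
}

fn main() {
    // No prior edge: use the timestamp-derived nonce.
    assert_eq!(propose_nonce(1_700_000_000, None), 1_700_000_001);
    // Prior edge is older than the clock: the fresh nonce wins.
    assert_eq!(propose_nonce(1_700_000_000, Some(5)), 1_700_000_001);
    // Reconnecting within the same second: the edge's successor nonce wins.
    assert_eq!(propose_nonce(1_700_000_000, Some(1_700_000_001)), 1_700_000_003);
    println!("ok");
}
```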
```diff
@@ -290,7 +301,7 @@ impl NetworkState {
             // Verify and broadcast the edge of the connection. Only then insert the new
             // connection to TIER2 pool, so that nothing is broadcasted to conn.
             // TODO(gprusak): consider actually banning the peer for consistency.
-            this.add_edges(&clock, vec![conn.edge.clone()])
+            this.add_edges(&clock, vec![conn.edge.load()])
                 .await
                 .map_err(|_: ReasonForBan| RegisterPeerError::InvalidEdge)?;
             this.tier2.insert_ready(conn.clone()).map_err(RegisterPeerError::PoolError)?;
```
```diff
@@ -510,7 +521,13 @@ impl NetworkState {
                 })
             })
             .await;
-        this.routing_table_view.add_local_edges(&edges);
+        let new_local_edges = this.routing_table_view.add_local_edges(&edges);
+
+        // Local edge information is also kept in the Connection fields in peers.
+        // Make sure that it gets updated too.
+        for new_edge in new_local_edges {
+            this.tier2.update_edge(&new_edge);
+        }
```
**Review comment:** `local_edges` are the edges of the graph adjacent to this node. Given that the edge is now supposed to change dynamically, and that it is delivered to a node via a `SyncRoutingTable` message, it doesn't make sense to duplicate it, I guess. We can either:
**Reply:** Yeah, we could remove the edge from the connection, but I'd do that in a separate PR, as this one is becoming too large.

**Reply:** Ack, please add a TODO.

**Review comment:** Also, you unnecessarily take a lock on the whole pool in `Pool::update_edge`. Instead, just update the edge in-place.

**Review comment:** And for that you might want `ArcMutex`/`ArcSwap` instead of `AtomicCell`.
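The reviewer's suggestion — update the edge in-place per connection instead of locking the whole pool — can be sketched with std-only types (a sketch under assumptions: the `Edge`, `Connection`, and `Pool` shapes here are illustrative, and a plain `RwLock` stands in for the `ArcSwap`/`ArcMutex` mentioned in the review):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Each connection holds its own swappable edge cell, so replacing one edge
// never requires exclusive access to the whole pool.
#[derive(Clone, Debug)]
struct Edge { peer: String, nonce: u64 }

struct Connection { edge: RwLock<Edge> }

struct Pool { by_peer: HashMap<String, Arc<Connection>> }

impl Pool {
    fn update_edge(&self, new_edge: &Edge) {
        // Read-only lookup in the map; only the one connection's cell is
        // written, and stale updates (lower nonce) are ignored.
        if let Some(conn) = self.by_peer.get(&new_edge.peer) {
            let mut slot = conn.edge.write().unwrap();
            if new_edge.nonce > slot.nonce {
                *slot = new_edge.clone();
            }
        }
    }
}

fn main() {
    let mut by_peer = HashMap::new();
    by_peer.insert(
        "peer1".to_string(),
        Arc::new(Connection {
            edge: RwLock::new(Edge { peer: "peer1".to_string(), nonce: 1 }),
        }),
    );
    let pool = Pool { by_peer };

    pool.update_edge(&Edge { peer: "peer1".to_string(), nonce: 3 });
    assert_eq!(pool.by_peer["peer1"].edge.read().unwrap().nonce, 3);

    // A stale edge (lower nonce) leaves the cell unchanged.
    pool.update_edge(&Edge { peer: "peer1".to_string(), nonce: 1 });
    assert_eq!(pool.by_peer["peer1"].edge.read().unwrap().nonce, 3);
    println!("ok");
}
```

An `ArcSwap` would additionally make reads lock-free, which matters if the edge is read on every message; the per-connection `RwLock` here only illustrates the narrower point that the pool-wide lock is unnecessary.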
```rust
        let mut new_edges = match this
            .routing_table_addr
```
```diff
@@ -603,6 +620,7 @@ impl NetworkState {
             &self.config.node_key,
             edge_info.signature,
         );
+        self.tier2.update_edge(&edge);
         self.add_edges(&clock, vec![edge.clone()]).await?;
         Ok(edge)
     }
```
```diff
@@ -660,7 +678,7 @@ impl NetworkState {
                     PartialEdgeInfo::new(
                         &node_id,
                         &conn.peer_info.id,
-                        edge.next(),
+                        std::cmp::max(Edge::create_fresh_nonce(&clock), edge.next()),
                         &this.config.node_key,
                     ),
                 )));
```
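Both this hunk and `propose_edge` only ever move nonces forward. The convention hinted at by the earlier comment "increase the latest nonce by 2 in case we miss a removal proposal" can be sketched as follows (an assumption about the scheme for illustration, not code copied from nearcore: odd nonces mean the edge is active, even nonces mean it was removed, and the higher nonce always wins):

```rust
// Sketch of an odd/even edge-nonce convention: each bump of the nonce flips
// the edge between Active (odd) and Removed (even), and when two observations
// of the same edge disagree, the larger nonce is the newer state.
#[derive(Clone, Copy, Debug, PartialEq)]
enum EdgeState { Active, Removed }

fn state_for(nonce: u64) -> EdgeState {
    if nonce % 2 == 1 { EdgeState::Active } else { EdgeState::Removed }
}

// Merging two observations of the same edge: the larger nonce wins.
fn merge(n1: u64, n2: u64) -> (u64, EdgeState) {
    let n = n1.max(n2);
    (n, state_for(n))
}

fn main() {
    // Nonce 3 = active edge; nonce 4 = its removal; nonce 5 = re-added.
    assert_eq!(merge(3, 4), (4, EdgeState::Removed));
    assert_eq!(merge(4, 5), (5, EdgeState::Active));
    // Proposing at nonce + 2 (skipping the removal slot) keeps the edge
    // active even if the partner's removal at nonce + 1 was missed.
    assert_eq!(state_for(3 + 2), EdgeState::Active);
    println!("ok");
}
```

Under this reading, taking `max(Edge::create_fresh_nonce(&clock), edge.next())` on reconnect guarantees the new proposal outranks both the stored edge and any removal of it.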
**Review comment:** The approach you took to inject a nonce is totally valid and I agree with it; on the other hand, it has become quite intrusive, given that we need to pass the nonce explicitly through a bunch of calls and it is used only for tests. Please consider using a lower-level abstraction than `PeerActor` to force a custom nonce. I already planned to do that in my next PR: https://github.com/near/nearcore/pull/8076/files#diff-b691489bc9c0e1683942223dad298acaca321241108c799bf7376c641d75be1a. PTAL at `chain/network/src/peer_manager/tests/nonce.rs` there to see whether it would also simplify your PR.
**Reply:** The `with_nonce` parameter was only there to fix that one test (which I see you also fixed in your PR).