Description
`BlockSyncManager`'s `Peer` tasks store requested block ids in `blocks_queue`. Then, on each iteration of the "main" loop, one of the ids is popped from the queue and the corresponding `BlockResponse` message is created and sent to the backend via `MessagingService`. The problem is that the rate at which `BlockResponse`s are produced is not tied in any way to the rate at which they are sent through the socket. So, the main loop will run through `blocks_queue` very quickly and create a bunch of `BlockResponse`s, which will sit in memory until the socket is able to process them. The maximum number of blocks in a block request is 500 and the maximum size of a block is 1 MB, so a single peer may consume up to 500 MB of memory on the node. Assuming that the peer is malicious and that there can be many such peers, they can consume several GB of memory on the node, potentially leading to DoS.
At the moment, though, our blocks are much smaller than 1 MB, so this issue has low impact for now.
P.S. There was an idea to use a bounded channel in the implementation of `MessagingService` to solve this issue. It might not be a good idea though, because:
- There is an additional channel in the backend through which it forwards `SyncMessage`s to one of its own `Peer` tasks, which in turn sends them through the socket. So, that channel would have to be made bounded as well (and if an additional intermediate channel is introduced in the future, it would have to be bounded too).
- Filling up a bounded channel will block the sender if the receiver is not able to retrieve the messages fast enough. This is especially dangerous in the case of the backend channel, where a blocked `Peer` may stall the entire backend.
Instead, we should perhaps have a way for the backend to inform `BlockSyncManager`'s `Peer` tasks that a certain message has already been sent through the socket, which could be used as the criterion for creating additional `BlockResponse` messages.