HDFS-16479. EC: NameNode should not send a reconstruction work when the source datanodes are insufficient #4138

Merged (6 commits) on Apr 14, 2022
@@ -2163,6 +2163,15 @@ BlockReconstructionWork scheduleReconstruction(BlockInfo block,
return null;
}

// skip the reconstruction if there are not enough source datanodes for the EC block
if (block.isStriped()) {
BlockInfoStriped stripedBlock = (BlockInfoStriped) block;
if (stripedBlock.getDataBlockNum() > srcNodes.length) {
@ayushtkn (Member) commented on Apr 5, 2022:

Had a very quick look. Just thinking about a scenario with, say, RS-6-3-1024k where we write only 1 MB: in that case the block group holds 1 data block + 3 parity blocks, 4 internal blocks in total. Will this code start returning null for such a group? Not sure whether getRealDataBlockNum helps here, if this is actually a problem.

The PR author (Member) replied:

@ayushtkn Thanks for your review. You're right, it's a problem.
I updated the PR to calculate the real data block number. It is the same logic used in StripedReader. I also added one more unit test to cover the case.
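The "real data block number" idea discussed above can be sketched as follows. This is an illustration, not the upstream Hadoop code: the class and method names here are hypothetical, and the formula simply caps the data-block count at the number of cells actually written, which is the behavior the reviewer's RS-6-3-1024k example calls for.

```java
// Sketch: compute the "real" number of data blocks in a striped block
// group, i.e. how many data cells actually contain bytes. Assumption:
// for a block group smaller than one full stripe, only
// ceil(numBytes / cellSize) data blocks exist, capped at the schema's
// data unit count. Names here are illustrative, not Hadoop's own.
public class RealDataBlockNum {
    static int getRealDataBlockNum(long numBytes, int cellSize, int numDataUnits) {
        long cellsUsed = (numBytes + cellSize - 1) / cellSize; // ceiling division
        return (int) Math.min(cellsUsed, numDataUnits);
    }

    public static void main(String[] args) {
        int mb = 1024 * 1024;
        // RS-6-3-1024k with only 1 MB written: a single data block exists,
        // so requiring 6 live source nodes would wrongly skip reconstruction.
        System.out.println(getRealDataBlockNum(1L * mb, mb, 6)); // prints 1
        // A full stripe (6 MB) uses all 6 data units.
        System.out.println(getRealDataBlockNum(6L * mb, mb, 6)); // prints 6
    }
}
```

Comparing srcNodes.length against this value, rather than against the schema's full data unit count, avoids returning null for small block groups.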

LOG.debug("Block {} cannot be reconstructed due to a shortage of source datanodes", block);
return null;
A reviewer (Member) commented:
Should we increment the metrics before returning null?

    NameNode.getNameNodeMetrics().incNumTimesReReplicationNotScheduled();

}
}

// liveReplicaNodes can include READ_ONLY_SHARED replicas which are
// not included in the numReplicas.liveReplicas() count
assert liveReplicaNodes.size() >= numReplicas.liveReplicas();
@@ -852,6 +852,49 @@ public void testChooseSrcDNWithDupECInDecommissioningNode() throws Exception {
0, numReplicas.redundantInternalBlocks());
}

@Test
public void testSkipReconstructionWithManyBusyNodes() {
long blockId = -9223372036854775776L; // real ec block id
// RS-3-2 EC policy
ErasureCodingPolicy ecPolicy =
SystemErasureCodingPolicies.getPolicies().get(1);
// striped blockInfo
Block aBlock = new Block(blockId, ecPolicy.getCellSize() * ecPolicy.getNumDataUnits(), 0);
BlockInfoStriped aBlockInfoStriped = new BlockInfoStriped(aBlock, ecPolicy);
// ec storageInfo
DatanodeStorageInfo ds1 = DFSTestUtil.createDatanodeStorageInfo(
"storage1", "1.1.1.1", "rack1", "host1");
DatanodeStorageInfo ds2 = DFSTestUtil.createDatanodeStorageInfo(
"storage2", "2.2.2.2", "rack2", "host2");
DatanodeStorageInfo ds3 = DFSTestUtil.createDatanodeStorageInfo(
"storage3", "3.3.3.3", "rack3", "host3");
DatanodeStorageInfo ds4 = DFSTestUtil.createDatanodeStorageInfo(
"storage4", "4.4.4.4", "rack4", "host4");

// link block with storage
aBlockInfoStriped.addStorage(ds1, aBlock);
aBlockInfoStriped.addStorage(ds2, new Block(blockId + 1, 0, 0));
aBlockInfoStriped.addStorage(ds3, new Block(blockId + 2, 0, 0));
aBlockInfoStriped.addStorage(ds4, new Block(blockId + 3, 0, 0));

addEcBlockToBM(blockId, ecPolicy);
aBlockInfoStriped.setBlockCollectionId(mockINodeId);

// reconstruction should be scheduled
BlockReconstructionWork work = bm.scheduleReconstruction(aBlockInfoStriped, 3);
assertNotNull(work);

// simulate ds3 and ds4 reaching maxReplicationStreams
for (int i = 0; i < bm.maxReplicationStreams; i++) {
ds3.getDatanodeDescriptor().incrementPendingReplicationWithoutTargets();
ds4.getDatanodeDescriptor().incrementPendingReplicationWithoutTargets();
}

// reconstruction should be skipped since there are not enough non-busy source nodes
work = bm.scheduleReconstruction(aBlockInfoStriped, 3);
assertNull(work);
}

@Test
public void testFavorDecomUntilHardLimit() throws Exception {
bm.maxReplicationStreams = 0;