Description
It appears that all replicas of a block can end up in the same rack. The likelihood of such mis-placed replicas seems to be directly related to decommissioning of nodes.
After a rolling OS upgrade of a running cluster (decommission 3-10% of the nodes, re-install, add them back), all replicas of about 0.16% of blocks ended up in the same rack.
The Hadoop NameNode UI etc. does not seem to know about such incorrectly replicated blocks, though "hadoop fsck .." does report that the blocks must be replicated on additional racks.
Looking at ReplicationTargetChooser.java, the following seems suspect:
snippet-01:
int maxNodesPerRack =
    (totalNumOfReplicas - 1) / clusterMap.getNumOfRacks() + 2;
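To see why this formula is suspect, it helps to evaluate it for typical replica and rack counts. The following is a minimal standalone sketch, not Hadoop code; the method name and parameters merely mirror snippet-01:

```java
public class MaxNodesPerRackDemo {
    // Hypothetical reproduction of the formula quoted in snippet-01.
    static int maxNodesPerRack(int totalNumOfReplicas, int numOfRacks) {
        return (totalNumOfReplicas - 1) / numOfRacks + 2;
    }

    public static void main(String[] args) {
        // With 3 replicas on a 2-rack cluster: (3-1)/2 + 2 = 3,
        // i.e. a single rack is permitted to hold every replica of the block.
        System.out.println(maxNodesPerRack(3, 2));  // 3
        // With 3 replicas and 40 racks: (3-1)/40 + 2 = 2,
        // so at most 2 of the 3 replicas may share a rack.
        System.out.println(maxNodesPerRack(3, 40)); // 2
    }
}
```

On small clusters (few racks) the computed limit can equal the total replica count, so the per-rack cap by itself never forces a second rack.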
snippet-02:
case 2:
    if (clusterMap.isOnSameRack(results.get(0), results.get(1))) {
        chooseRemoteRack(1, results.get(0), excludedNodes, blocksize,
                         maxNodesPerRack, results);
    } else if (newBlock) {
        chooseLocalRack(results.get(1), excludedNodes, blocksize,
                        maxNodesPerRack, results);
    } else {
        chooseLocalRack(writer, excludedNodes, blocksize,
                        maxNodesPerRack, results);
    }
    if (--numOfReplicas == 0) {
        break;
    }
snippet-03:
do {
    DatanodeDescriptor[] selectedNodes = chooseRandom(1, nodes, excludedNodes);
    if (selectedNodes.length == 0) {
        throw new NotEnoughReplicasException(
            "Not able to place enough replicas");
    }
    result = (DatanodeDescriptor) selectedNodes[0];
} while (!isGoodTarget(result, blocksize, maxNodesPerRack, results));
</do>
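The do/while in snippet-03 only rejects a candidate when isGoodTarget fails, and the per-rack part of that check is bounded by maxNodesPerRack. The sketch below is a hypothetical simplification of that rack check (not the actual Hadoop code), assuming isGoodTarget caps the number of already-chosen nodes on the candidate's rack at maxNodesPerRack:

```java
import java.util.List;

public class RackCheckDemo {
    // Hypothetical simplification of the per-rack counting inside
    // isGoodTarget(); names and semantics are assumptions for illustration.
    static boolean rackHasRoom(String rack, List<String> chosenRacks,
                               int maxNodesPerRack) {
        long onRack = chosenRacks.stream().filter(r -> r.equals(rack)).count();
        return onRack + 1 <= maxNodesPerRack;
    }

    public static void main(String[] args) {
        // maxNodesPerRack = 3 is what snippet-01 yields for 3 replicas, 2 racks.
        List<String> chosen = List.of("/rack1", "/rack1");
        // A third node on the same rack still passes the check.
        System.out.println(rackHasRoom("/rack1", chosen, 3)); // true
        // A cap of 2 would have rejected it and forced another rack.
        System.out.println(rackHasRoom("/rack1", chosen, 2)); // false
    }
}
```

Under that assumption, when maxNodesPerRack equals the replica count, the retry loop happily accepts a target on a rack that already holds every other replica, which is consistent with the observed same-rack placements.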
Attachments
Issue Links
- relates to HDFS-15 Rack replication policy can be violated for over replicated blocks (Closed)