Hadoop HDFS / HDFS-11419

BlockPlacementPolicyDefault is choosing datanode in an inefficient way



    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Done
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: namenode
    • Labels: None


      Currently in BlockPlacementPolicyDefault, chooseTarget ends up calling into chooseRandom, which first finds a random datanode by calling

      DatanodeDescriptor chosenNode = chooseDataNode(scope, excludedNodes);

      and then checks whether the returned datanode satisfies the storage type requirement:

      storage = chooseStorage4Block(
                    chosenNode, blocksize, results, entry.getKey());

      If it does, numOfReplicas--; otherwise, the node is added to the excluded nodes, and the loop runs again until numOfReplicas is down to 0.
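This loop can be sketched, in a heavily simplified and hypothetical form (the method names mirror the real ones, but the bodies are stand-ins, not the actual HDFS implementation):

```java
import java.util.*;

public class ChooseRandomSketch {

  // Pick numOfReplicas nodes from cluster, where only nodes in storageOk
  // satisfy the (stand-in) storage type check. Mirrors the described loop:
  // random pick first, storage check only afterwards.
  static List<String> pickReplicas(List<String> cluster,
                                   Set<String> storageOk,
                                   int numOfReplicas,
                                   long seed) {
    Random rnd = new Random(seed);
    Set<String> excludedNodes = new HashSet<>();
    List<String> results = new ArrayList<>();
    while (numOfReplicas > 0 && excludedNodes.size() < cluster.size()) {
      // chooseDataNode: a uniformly random node from the scope.
      String chosenNode = cluster.get(rnd.nextInt(cluster.size()));
      if (excludedNodes.contains(chosenNode)) {
        continue; // random pick landed on an already-excluded node
      }
      excludedNodes.add(chosenNode);
      // chooseStorage4Block: only now is the storage type checked.
      if (storageOk.contains(chosenNode)) {
        results.add(chosenNode);
        numOfReplicas--;
      }
    }
    return results;
  }

  public static void main(String[] args) {
    List<String> cluster = new ArrayList<>();
    for (int i = 0; i < 100; i++) cluster.add("dn" + i);
    // Only two of the 100 nodes satisfy the storage type requirement.
    Set<String> ssdNodes = new HashSet<>(Arrays.asList("dn3", "dn42"));
    System.out.println(pickReplicas(cluster, ssdNodes, 2, 0L));
  }
}
```

With 98 of 100 nodes failing the storage check, nearly every iteration discovers an unusable node only after it has been randomly drawn.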

      A problem here is that the storage type is not considered until after a random node has already been returned. We've seen a case where a cluster has a large number of datanodes while only a few satisfy the storage type condition. So, for the most part, this code blindly picks random datanodes that do not satisfy the storage type requirement.

      To make matters worse, the way NetworkTopology#chooseRandom works is that, given a set of excluded nodes, it first finds a random datanode; if that node is in the excluded set, it tries another random node. So the more excluded nodes there are, the more likely a random pick falls in the excluded set, in which case one iteration is simply wasted.
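This retry behavior is easy to quantify: with N nodes in scope and E of them excluded, a uniform random pick is useful with probability (N - E) / N, so the expected number of picks before a useful one is N / (N - E) (a geometric distribution). A quick illustration:

```java
public class ExpectedPicks {
  // Expected number of uniform random picks until one lands outside the
  // excluded set: totalNodes / (totalNodes - excludedNodes).
  static double expectedPicks(int totalNodes, int excludedNodes) {
    return (double) totalNodes / (totalNodes - excludedNodes);
  }

  public static void main(String[] args) {
    System.out.println(expectedPicks(1000, 0));   // 1.0
    System.out.println(expectedPicks(1000, 900)); // 10.0
    System.out.println(expectedPicks(1000, 990)); // 100.0
  }
}
```

Once most of the cluster has been excluded (which happens quickly when few nodes satisfy the storage type), each useful pick costs many wasted iterations.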

      Therefore, this JIRA proposes to augment/modify the relevant classes in a way that datanodes can be found more efficiently. There are currently two different high level solutions we are considering:

      1. Add some field(s) to the Node base types describing the storage type info, take such field(s) into account when searching for a node, and do not return nodes that do not meet the storage type requirement.

      2. Change the NetworkTopology class to be aware of storage types, e.g. for each storage type, keep one tree subset that connects all the nodes with that type, and run each search on only one such subset. Unexpected storage types are then simply not in the search space.
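Option 2 could be sketched very roughly as one candidate collection per storage type, so a search never even sees nodes of the wrong type. The names here are illustrative (the actual proposal keeps per-type subtrees of NetworkTopology; a flat list is used for brevity):

```java
import java.util.*;

public class StorageTypeTopology {
  enum StorageType { DISK, SSD, ARCHIVE }

  // One candidate set per storage type.
  private final Map<StorageType, List<String>> nodesByType =
      new EnumMap<>(StorageType.class);
  private final Random rnd = new Random();

  void add(String node, StorageType type) {
    nodesByType.computeIfAbsent(type, t -> new ArrayList<>()).add(node);
  }

  // Every candidate already satisfies the storage type requirement; only
  // the excluded set can still rule a node out.
  String chooseRandom(StorageType type, Set<String> excludedNodes) {
    List<String> eligible = new ArrayList<>();
    for (String n : nodesByType.getOrDefault(type, Collections.emptyList())) {
      if (!excludedNodes.contains(n)) {
        eligible.add(n);
      }
    }
    return eligible.isEmpty() ? null : eligible.get(rnd.nextInt(eligible.size()));
  }

  public static void main(String[] args) {
    StorageTypeTopology topo = new StorageTypeTopology();
    for (int i = 0; i < 98; i++) topo.add("dn" + i, StorageType.DISK);
    topo.add("ssd1", StorageType.SSD);
    topo.add("ssd2", StorageType.SSD);
    // The search for an SSD node never considers the 98 DISK nodes.
    System.out.println(topo.chooseRandom(StorageType.SSD, new HashSet<>()));
  }
}
```

The trade-off of either option is extra bookkeeping when a node's storage composition changes, in exchange for searches that no longer waste iterations on nodes of the wrong type.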

      Thanks to szetszwo for the offline discussion, and thanks to linyiqun for pointing out a wrong statement (now corrected) in the description. Any further comments are more than welcome.


      Assignee: vagarychen Chen Liang
      Reporter: vagarychen Chen Liang