Hadoop HDFS / HDFS-14637

Namenode may not replicate blocks to meet the policy after enabling upgradeDomain



    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.3.0
    • Fix Version/s: 3.3.0, 3.1.4, 3.2.2
    • Component/s: namenode
    • Labels: None


      After changing the network topology or placement policy on a cluster and restarting the namenode, the namenode scans all blocks on the cluster at startup and checks whether they meet the current placement policy. Any block that does not is added to the replication queue, and the namenode arranges for it to be replicated so the placement policy is satisfied.

      If you start with a cluster with no UpgradeDomain and then enable UpgradeDomain, then on restart the NN does notice that all the blocks violate the placement policy, and it adds them to the replication queue. However, I believe there are some issues in the logic that prevent the blocks from replicating, depending on the setup:
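For reference, enabling upgrade domains is typically a configuration change along these lines (property names and class names per the Hadoop upgrade domain documentation; the hosts file path is illustrative):

```
<!-- hdfs-site.xml: switch to the upgrade-domain-aware placement policy -->
<property>
  <name>dfs.block.replicator.classname</name>
  <value>org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyWithUpgradeDomain</value>
</property>
<!-- Use the JSON-based hosts provider so each datanode entry can carry an upgradeDomain field -->
<property>
  <name>dfs.namenode.hosts.provider.classname</name>
  <value>org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager</value>
</property>
<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf/dfs.hosts.json</value>
</property>
```

After this change, a namenode restart triggers the startup scan described above, which queues all previously written blocks for replication.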

      With UD enabled but no racks configured, and possibly on a 2 rack cluster, the queued replication work never makes any progress: in blockManager.validateReconstructionWork(), the namenode checks whether the new replica increases the number of racks, and if it does not, it skips the work and retries it later.

      DatanodeStorageInfo[] targets = rw.getTargets();
      if ((numReplicas.liveReplicas() >= requiredRedundancy) &&
          (!isPlacementPolicySatisfied(block)) ) {
        if (!isInNewRack(rw.getSrcNodes(), targets[0].getDatanodeDescriptor())) {
          // No use continuing, unless a new rack in this case
          return false;
        }
        // mark that the reconstruction work is to replicate internal block to a
        // new rack.
        rw.setNotEnoughRack();
      }
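The dead-end can be sketched as follows. This is an illustrative stand-in for the check, not the actual BlockManager code: on a single-rack cluster every candidate target shares the source replicas' rack, so the "new rack" condition can never be met and the queued work is skipped and requeued indefinitely, even though the upgrade domain policy is still violated.

```java
import java.util.Arrays;

public class NewRackCheck {
  /** Simplified stand-in for BlockManager.isInNewRack(): the target only
   *  counts as "new" if no existing replica sits on the target's rack. */
  static boolean isInNewRack(String[] srcNodeRacks, String targetRack) {
    return Arrays.stream(srcNodeRacks).noneMatch(r -> r.equals(targetRack));
  }

  public static void main(String[] args) {
    String[] srcRacks = { "/default-rack", "/default-rack", "/default-rack" };
    // Single-rack cluster: any chosen target is also on /default-rack, so the
    // check fails and the reconstruction work is skipped and retried forever.
    System.out.println(isInNewRack(srcRacks, "/default-rack")); // false
    // Only a genuinely new rack would let the work proceed.
    System.out.println(isInNewRack(srcRacks, "/rack2")); // true
  }
}
```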

      Additionally, in blockManager.scheduleReconstruction() there is some logic that sets the number of new replicas required to one if the live replicas >= requiredRedundancy:

      int additionalReplRequired;
      if (numReplicas.liveReplicas() < requiredRedundancy) {
        additionalReplRequired = requiredRedundancy - numReplicas.liveReplicas()
            - pendingNum;
      } else {
        additionalReplRequired = 1; // Needed on a new rack
      }

      With UD, it is possible for 2 new replicas to be needed to meet the block placement policy, if all existing replicas are on nodes in the same upgrade domain. Under the traditional '2 rack redundancy' policy, only 1 new replica would ever have been needed in this scenario.
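The shortfall can be sketched as the gap between the distinct upgrade domains currently covered and the number the policy requires (illustrative only, not HDFS code; assumes 3 distinct domains are required for a replication factor of 3). The hard-coded additionalReplRequired = 1 in the branch above is only correct when a single extra replica can repair the violation, as with the classic two-rack policy.

```java
import java.util.Arrays;
import java.util.HashSet;

public class AdditionalReplicas {
  /** Extra replicas needed to reach the required number of distinct
   *  upgrade domains, given the domains of the current live replicas. */
  static int additionalNeeded(String[] replicaDomains, int requiredDistinctDomains) {
    int distinct = new HashSet<>(Arrays.asList(replicaDomains)).size();
    return Math.max(0, requiredDistinctDomains - distinct);
  }

  public static void main(String[] args) {
    // All three replicas landed on nodes in the same upgrade domain:
    // two more replicas are needed, not the hard-coded one.
    System.out.println(additionalNeeded(new String[]{"ud1", "ud1", "ud1"}, 3)); // 2
    // Two domains already covered: one more replica suffices.
    System.out.println(additionalNeeded(new String[]{"ud1", "ud1", "ud2"}, 3)); // 1
  }
}
```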


        1. HDFS-14637.branch-3.2.patch
          28 kB
          Wei-Chiu Chuang
        2. HDFS-14637.branch-3.1.patch
          28 kB
          Wei-Chiu Chuang
        3. HDFS-14637.005.patch
          28 kB
          Stephen O'Donnell
        4. HDFS-14637.004.patch
          28 kB
          Stephen O'Donnell
        5. HDFS-14637.003.patch
          28 kB
          Stephen O'Donnell
        6. HDFS-14637.002.patch
          27 kB
          Stephen O'Donnell
        7. HDFS-14637.001.patch
          17 kB
          Stephen O'Donnell




              Assignee: Stephen O'Donnell (sodonnell)
              Reporter: Stephen O'Donnell (sodonnell)
              Votes: 0
              Watchers: 8