  1. Hadoop HDFS
  2. HDFS-8031 Follow-on work for erasure coding phase I (striping layout)
  3. HDFS-10968

BlockManager#isInNewRack should consider decommissioning nodes



    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0-alpha1
    • Fix Version/s: 3.0.0-alpha2
    • Component/s: erasure-coding, namenode
    • Labels:


      For an EC block, it is possible to have enough internal blocks but not enough racks. For this case, the current reconstruction code calls BlockManager#isInNewRack to check whether the target node would increase the total number of racks, by comparing the target node's rack with the source nodes' racks:

          for (DatanodeDescriptor src : srcs) {
            if (src.getNetworkLocation().equals(target.getNetworkLocation())) {
              return false;
            }
          }
          return true;
      However, srcs may include a decommissioning node, in which case we should allow the target node to be in the same rack as it, since the decommissioning node's replica will eventually go away.

      For example, suppose we have 11 nodes, h1 ~ h11, located in racks r1, r1, r2, r2, r3, r3, r4, r4, r5, r5, r6, respectively. Suppose an EC block has 9 live internal blocks on h1 ~ h8 and h11, and one internal block on h9, which is to be decommissioned. The current code will not choose h10 as the reconstruction target because isInNewRack considers h10 to be on the same rack as h9.
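The intended behavior can be sketched as below. This is a simplified illustration, not the actual HDFS patch: Node is a hypothetical stand-in for DatanodeDescriptor, keeping only the rack and decommissioning state, and the check simply skips decommissioning sources when deciding whether the target adds a new rack.

```java
import java.util.List;

class RackCheckSketch {
  // Minimal stand-in for DatanodeDescriptor: just a rack and a decommissioning flag.
  static final class Node {
    final String rack;
    final boolean decommissioning;
    Node(String rack, boolean decommissioning) {
      this.rack = rack;
      this.decommissioning = decommissioning;
    }
  }

  // Returns true if placing a block on `target` would add a new rack.
  // Decommissioning sources are ignored: their replicas will be removed,
  // so sharing a rack with them does not reduce rack diversity.
  static boolean isInNewRack(List<Node> srcs, Node target) {
    for (Node src : srcs) {
      if (!src.decommissioning && src.rack.equals(target.rack)) {
        return false;
      }
    }
    return true;
  }
}
```

With this check, the scenario above works out: h10 (rack r5) is accepted as a reconstruction target because the only source sharing its rack, h9, is decommissioning.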


        1. HDFS-10968.000.patch (10 kB, Jing Zhao)



            • Assignee:
              jingzhao Jing Zhao

