Hadoop HDFS / HDFS-7300

The getMaxNodesPerRack() method in BlockPlacementPolicyDefault is flawed


Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.6.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      The getMaxNodesPerRack() method can produce an undesirable result in some cases.

      • Three replicas on two racks: the max is 3, so all three replicas can end up on a single rack.
      • Two replicas on two or more racks: the max is 2, so both replicas can end up in the same rack.
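      The two cases above fall out of the formula the old code used, roughly (totalNumOfReplicas - 1) / numOfRacks + 2. The following sketch reproduces that arithmetic; the method name mirrors BlockPlacementPolicyDefault#getMaxNodesPerRack(), but the class and structure are illustrative, not the verbatim Hadoop source.

      ```java
      /** Illustrative sketch of the pre-HDFS-7300 per-rack cap calculation. */
      public class MaxNodesPerRackDemo {
        // Flawed formula: the cap can equal the total replica count,
        // so one rack may legally hold every replica.
        static int getMaxNodesPerRack(int totalNumOfReplicas, int numOfRacks) {
          return (totalNumOfReplicas - 1) / numOfRacks + 2;
        }

        public static void main(String[] args) {
          // Three replicas, two racks: (3-1)/2 + 2 = 3, so one rack can take all three.
          System.out.println(getMaxNodesPerRack(3, 2)); // 3
          // Two replicas, two racks: (2-1)/2 + 2 = 2, so one rack can take both.
          System.out.println(getMaxNodesPerRack(2, 2)); // 2
        }
      }
      ```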

      BlockManager#isNeededReplication() fixes this after the block/file is closed, because blockHasEnoughRacks() will return false. This is not only extra work; it can also break the favored nodes feature.

      When there are two racks and two favored nodes are specified in the same rack, the NN may allocate the third replica on a node in that same rack, because maxNodesPerRack is 3. When the file is closed, the NN moves one block to the other rack, so there is a 66% chance that a favored node's replica is the one moved. If maxNodesPerRack were 2, this would not happen.
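      One way to get the desired cap, sketched below under the assumption that it matches the spirit of the attached patch (not the verbatim fix): lower the cap by one whenever it would equal the total replica count, which forces at least two racks to be used.

      ```java
      /** Illustrative sketch of an adjusted per-rack cap for multi-rack clusters. */
      public class AdjustedMaxNodesPerRack {
        static int getMaxNodesPerRack(int totalNumOfReplicas, int numOfRacks) {
          int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 2;
          // If the cap equals the total replica count, a single rack could hold
          // every replica; decrementing it forces a second rack into play.
          if (numOfRacks > 1 && maxNodesPerRack == totalNumOfReplicas) {
            maxNodesPerRack--;
          }
          return maxNodesPerRack;
        }

        public static void main(String[] args) {
          // Three replicas, two racks: cap is 2, so the third replica must
          // go to the other rack at allocation time (no post-close move).
          System.out.println(getMaxNodesPerRack(3, 2)); // 2
          // Two replicas, two racks: cap is 1, so the replicas land on
          // different racks immediately.
          System.out.println(getMaxNodesPerRack(2, 2)); // 1
        }
      }
      ```

      With the cap at 2 in the favored-nodes scenario above, the third replica is placed on the other rack up front, so no favored-node replica is at risk of being moved at close time.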

      Attachments

        1. HDFS-7300.patch
          4 kB
          Kihwal Lee
        2. HDFS-7300.v2.patch
          8 kB
          Kihwal Lee

        Activity

          People

            Assignee: Kihwal Lee
            Reporter: Kihwal Lee
            Votes: 0
            Watchers: 6

            Dates

              Created:
              Updated:
              Resolved: