Hadoop HDFS / HDFS-11609

Some blocks can be permanently lost if nodes are decommissioned while dead


Details

    • Hadoop Flags: Reviewed

    Description

      When all of the nodes holding the replicas of a block are decommissioned while they are dead, they are marked as decommissioned right away, even if that leaves blocks missing. This behavior was introduced by HDFS-7374.
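
      Below is a minimal, self-contained sketch of the decision described above. The class and method names are hypothetical and do not match the actual HDFS namenode code; it only illustrates how skipping the replication check for dead nodes lets a node reach DECOMMISSIONED while some of its blocks have no live replica anywhere.

      {code:java}
      import java.util.Map;
      import java.util.Set;

      public class DecommissionSketch {

        enum AdminState { NORMAL, DECOMMISSION_IN_PROGRESS, DECOMMISSIONED }

        static class Node {
          final String name;
          final boolean alive;        // false once heartbeats have stopped
          final Set<String> blocks;   // block IDs this node holds
          AdminState adminState = AdminState.NORMAL;

          Node(String name, boolean alive, Set<String> blocks) {
            this.name = name;
            this.alive = alive;
            this.blocks = blocks;
          }
        }

        /** Hypothetical version of the shortcut: dead nodes skip the replication check. */
        static void startDecommission(Node node, Map<String, Integer> liveReplicas) {
          if (!node.alive) {
            // Dead node: marked decommissioned immediately, even if some of its
            // blocks have no live replica elsewhere.
            node.adminState = AdminState.DECOMMISSIONED;
            return;
          }
          // Live node: decommission completes only once every block it holds
          // has at least one live replica somewhere else.
          boolean safe = node.blocks.stream()
              .allMatch(b -> liveReplicas.getOrDefault(b, 0) > 0);
          node.adminState = safe ? AdminState.DECOMMISSIONED
                                 : AdminState.DECOMMISSION_IN_PROGRESS;
        }

        public static void main(String[] args) {
          // blk_1 lives only on a dead node, so it is effectively missing.
          Node deadNode = new Node("dn1", false, Set.of("blk_1"));
          startDecommission(deadNode, Map.of("blk_1", 0));
          // Prints DECOMMISSIONED even though blk_1 has zero live replicas.
          System.out.println("dn1: " + deadNode.adminState);
        }
      }
      {code}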

      The problem starts when those decommissioned nodes are brought back online. The namenode then no longer shows any missing blocks, which creates a false sense of cluster health. If the decommissioned nodes are later removed and reformatted, the block data is permanently lost. The namenode only begins reporting the blocks as missing once the heartbeat recheck interval (e.g. 10 minutes) has elapsed from the moment the last of those nodes is taken down.
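
      For reference, the quoted ~10 minute delay corresponds to the namenode's dead-node detection cutoff. The formula and default values below are assumptions for illustration (2 * dfs.namenode.heartbeat.recheck-interval + 10 * dfs.heartbeat.interval); verify them against the DatanodeManager code of your release.

      {code:java}
      public class DeadNodeCutoff {
        public static void main(String[] args) {
          // Assumed defaults: dfs.namenode.heartbeat.recheck-interval = 5 min,
          // dfs.heartbeat.interval = 3 s.
          long recheckIntervalMs = 5 * 60 * 1000L;
          long heartbeatIntervalMs = 3 * 1000L;

          // Assumed cutoff formula: 2 * recheck interval + 10 * heartbeat interval.
          long cutoffMs = 2 * recheckIntervalMs + 10 * heartbeatIntervalMs;

          // Prints ~10.5 minutes: the blocks show up as missing only after this delay.
          System.out.printf("dead-node cutoff: %.1f minutes%n", cutoffMs / 60000.0);
        }
      }
      {code}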

      There are multiple issues in the code. Since some of them cause different behavior in testing than in production, it took a while to reproduce the problem in a unit test. I will present an analysis and a proposal soon.

      Attachments

        1. HDFS-11609_v3.trunk.patch
          11 kB
          Kihwal Lee
        2. HDFS-11609_v3.branch-2.patch
          10 kB
          Kihwal Lee
        3. HDFS-11609_v3.branch-2.7.patch
          10 kB
          Kihwal Lee
        4. HDFS-11609_v2.trunk.patch
          10 kB
          Kihwal Lee
        5. HDFS-11609_v2.branch-2.patch
          10 kB
          Kihwal Lee
        6. HDFS-11609.branch-2.patch
          9 kB
          Kihwal Lee
        7. HDFS-11609.trunk.patch
          10 kB
          Kihwal Lee

            People

              Assignee: Kihwal Lee (kihwal)
              Reporter: Kihwal Lee (kihwal)
              Votes: 0
              Watchers: 18
