Hadoop HDFS / HDFS-10857

Rolling upgrade can make data unavailable when the cluster has many failed volumes


Details

    • Type: Bug
    • Status: Patch Available
    • Priority: Critical
    • Resolution: Unresolved
    • Affects Version/s: 2.6.4
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

    Description

      When the rolling upgrade marker file or trash directory is created or removed during heartbeat response processing, an IOException is thrown if the operation is attempted on a failed volume. This stops processing of the remaining storage directories and of any DNA commands that were part of the heartbeat response.
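
      The failure pattern in miniature (an illustrative sketch only; the class and method names below are hypothetical, not the actual HDFS code paths): one bad volume makes the per-directory loop throw, and the exception propagates out before the remaining storage directories and the queued commands are touched.

          import java.io.File;
          import java.io.IOException;
          import java.util.List;

          class HeartbeatResponseSketch {
            // Hypothetical stand-in for applying the rolling upgrade status
            // and then the rest of the heartbeat commands.
            void processHeartbeatResponse(List<File> storageDirs,
                                          List<Runnable> commands) throws IOException {
              for (File dir : storageDirs) {
                File marker = new File(dir, "rollingUpgrade.marker");
                // On a failed volume this throws IOException, aborting the
                // loop and skipping the command loop below entirely.
                if (!marker.exists() && !marker.createNewFile()) {
                  throw new IOException("Could not create " + marker);
                }
              }
              // Never reached after the throw; this is where the block
              // token key update command would have been applied.
              for (Runnable cmd : commands) {
                cmd.run();
              }
            }
          }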

      While this is happening, the block token key update never occurs, so all read and write requests start to fail and keep failing until the upgrade is finalized and the DN receives a new key. A single failed volume is enough. If there are three such nodes in the cluster, it is very likely that some blocks cannot be read. Unlike the common missing-blocks scenarios, the NN has no idea, although the effect is the same.
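
      One plausible mitigation, sketched under assumptions (it may or may not match what the attached patch does): treat the marker and trash operations as best-effort per volume, catching the IOException locally so the remaining directories and the rest of the heartbeat response, including the key update, are still processed.

          import java.io.File;
          import java.io.IOException;
          import java.util.List;

          class TolerantHeartbeatSketch {
            void processHeartbeatResponse(List<File> storageDirs,
                                          List<Runnable> commands) {
              for (File dir : storageDirs) {
                try {
                  File marker = new File(dir, "rollingUpgrade.marker");
                  if (!marker.exists() && !marker.createNewFile()) {
                    throw new IOException("Could not create " + marker);
                  }
                } catch (IOException e) {
                  // Best-effort: one failed volume must not block the
                  // remaining directories or the rest of the response.
                  System.err.println("Skipping failed volume " + dir + ": " + e);
                }
              }
              // Still reached even with failed volumes, so the block token
              // key update is applied and reads and writes keep working.
              for (Runnable cmd : commands) {
                cmd.run();
              }
            }
          }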

      Attachments

        1. HDFS-10857.branch-2.6.patch (7 kB, Kihwal Lee)


          People

            Assignee: Unassigned
            Reporter: Kihwal Lee
            Votes: 0
            Watchers: 4

            Dates

              Created:
              Updated: