Hadoop HDFS / HDFS-3703

Decrease the datanode failure detection time


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.0.3, 2.0.0-alpha, 3.0.0-alpha1
    • Fix Version/s: 1.1.0, 2.0.3-alpha
    • Component/s: datanode, namenode
    • Labels: None
    • Hadoop Flags: Reviewed
    • Release Note:
      This jira adds a new DataNode state called "stale" at the NameNode. A DataNode is marked as stale if it does not send a heartbeat message to the NameNode within the timeout configured via the configuration parameter "dfs.namenode.stale.datanode.interval", in seconds (default value is 30 seconds). The NameNode picks a stale datanode as the last target to read from when returning block locations for reads.

      This feature is turned off by default. To turn on the feature, set the HDFS configuration "dfs.namenode.check.stale.datanode" to true.
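      For reference, a minimal hdfs-site.xml sketch that enables the feature described above. The property names are taken from this release note; the interval value and its exact unit should be verified against the hdfs-default.xml shipped with your release.

          <!-- Sketch only: enable stale-DataNode checking on the NameNode. -->
          <property>
            <name>dfs.namenode.check.stale.datanode</name>
            <value>true</value>   <!-- the feature is off by default -->
          </property>
          <property>
            <name>dfs.namenode.stale.datanode.interval</name>
            <value>30</value>     <!-- stale timeout; stated above as seconds, default 30 -->
          </property>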

    Description

      By default, if a box dies, its datanode will be marked as dead by the namenode only after 10:30 minutes (see the note after this description for where that figure comes from). In the meantime, this datanode will still be proposed by the namenode as a target to write blocks or to read replicas. The same happens if the datanode process crashes: there is no shutdown hook to tell the namenode we're not there anymore.
      It is especially an issue with HBase. The HBase regionserver timeout in production is often 30s. So with these configs, when a box dies HBase starts to recover after 30s, while for another 10 minutes the namenode will still consider the blocks on the dead box as available. Beyond the write errors, this triggers a lot of failed reads:

      • during the recovery, HBase needs to read the blocks that were in use on the dead box (the ones in the 'HBase Write-Ahead-Log')
      • after the recovery, reading these data blocks (the 'HBase region') will fail about 33% of the time with the default number of replicas (one of the three replicas sits on the dead box), slowing down data access, especially when the errors are socket timeouts (i.e. around 60s each most of the time).

      Globally, it would be ideal if the HDFS failure-detection timeouts could be set below the HBase ones.
      As a side note, HBase relies on ZooKeeper to detect regionserver issues.
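
      The 10:30-minute figure above follows from the NameNode's dead-node expiry rule: a datanode is declared dead after 2 * heartbeat-recheck-interval + 10 * heartbeat-interval, i.e. 2 * 5 min + 10 * 3 s = 10 min 30 s with the defaults. A minimal hdfs-site.xml sketch of those defaults is below; the property names are the ones commonly documented for the 2.x line, and this snippet is illustrative only, not part of this issue's patch.

          <!-- Defaults behind the ~10:30 dead-node detection time (illustrative only). -->
          <property>
            <name>dfs.namenode.heartbeat.recheck-interval</name>
            <value>300000</value>   <!-- milliseconds; default 5 minutes -->
          </property>
          <property>
            <name>dfs.heartbeat.interval</name>
            <value>3</value>        <!-- seconds; default 3 seconds -->
          </property>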

      Attachments

        1. HDFS-3703.patch (16 kB, Jing Zhao)
        2. HDFS-3703-branch2.patch (18 kB, Nicolas Liochon)
        3. HDFS-3703-trunk-with-write.patch (18 kB, Jing Zhao)
        4. HDFS-3703-trunk-read-only.patch (16 kB, Jing Zhao)
        5. HDFS-3703-trunk-read-only.patch (16 kB, Jing Zhao)
        6. HDFS-3703-trunk-read-only.patch (18 kB, Jing Zhao)
        7. HDFS-3703-trunk-read-only.patch (18 kB, Jing Zhao)
        8. HDFS-3703-trunk-read-only.patch (18 kB, Jing Zhao)
        9. HDFS-3703-trunk-read-only.patch (22 kB, Jing Zhao)
        10. 3703-hadoop-1.0.txt (15 kB, Ted Yu)
        11. HDFS-3703-trunk-read-only.patch (22 kB, Jing Zhao)
        12. HDFS-3703-branch-1.1-read-only.patch (13 kB, Jing Zhao)
        13. HDFS-3703-branch-1.1-read-only.patch (13 kB, Jing Zhao)

        Issue Links

          Activity

            People

              Assignee: Jing Zhao (jingzhao)
              Reporter: Nicolas Liochon (nkeywal)
              Votes: 0
              Watchers: 29

              Dates

                Created:
                Updated:
                Resolved: