Hadoop HDFS / HDFS-7240 Scaling HDFS / HDFS-13132

Ozone: Handle datanode failures in Storage Container Manager


Details

    • Type: Sub-task
    • Status: Patch Available
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version: HDFS-7240
    • Fix Version: HDFS-7240
    • Component: ozone
    • Labels: None

    Description

      Currently, SCM receives heartbeats from the datanodes in the cluster, along with container reports. Apart from this, the Ratis leader also receives heartbeats from the nodes in its Raft ring. The Ratis heartbeats are at a smaller interval (500 ms) whereas the SCM heartbeats are at a larger one (30 s); it is therefore considered safe to assume that a datanode is really lost when SCM misses heartbeats from it.
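The staleness check described above can be sketched as follows. This is a minimal illustration, not the actual SCM code; the class and method names are hypothetical, and only the interval values (30 s heartbeat, 1.5 m stale) come from the description.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

/**
 * Hypothetical sketch of heartbeat-based staleness detection.
 * SCM records the last heartbeat time per datanode and declares a
 * node stale once it has been silent past the stale interval.
 */
public class NodeLivenessTracker {
  // Intervals taken from the description: 30 s heartbeats, 1.5 m stale.
  static final long HEARTBEAT_INTERVAL_MS = TimeUnit.SECONDS.toMillis(30);
  static final long STALE_INTERVAL_MS = TimeUnit.SECONDS.toMillis(90);

  private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();

  /** Record a heartbeat from a datanode at the given time. */
  public void onHeartbeat(String datanodeId, long nowMs) {
    lastHeartbeat.put(datanodeId, nowMs);
  }

  /** A known node is stale once heartbeats have been missed past the stale interval. */
  public boolean isStale(String datanodeId, long nowMs) {
    Long last = lastHeartbeat.get(datanodeId);
    return last != null && nowMs - last > STALE_INTERVAL_MS;
  }
}
```

With a 90 s stale interval, a node that last reported at t=0 is not stale at t=60 s but is stale at t=100 s, i.e. after roughly three missed 30 s heartbeats.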

      Pipeline recovery will proceed through the following steps:

      1) As noted earlier, SCM will identify a dead DN via missed heartbeats. The current stale interval is 1.5 m. Once a stale node has been identified, SCM will find the list of containers for the pipelines the datanode was part of.

      2) SCM sends a close container command to the datanodes. Note that at this point the Ratis ring still has 2 nodes, so consistency can still be guaranteed by Ratis.

      3) If another node dies before the close container command succeeds, then Ratis can no longer guarantee consistency of the data being written or of the container close. The pipeline will then be marked as being in an inconsistent state.

      4) The closed container will be replicated via the closed-container replication protocol.
      If the dead datanode comes back, then as part of the re-register command, SCM will ask the datanode to format all of its open containers.

      5) The healthy nodes are returned to the free node pool for the next pipeline allocation.

      6) Read operations on closed containers will succeed; however, read operations on an open container on a single-node cluster will be disallowed. They will only be allowed under a special flag, aka the ReadInconsistentData flag.
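The steps above can be sketched as a small state machine. This is a hypothetical illustration of the described flow, not the actual Ozone/SCM implementation; all names (DeadNodeHandler, sendCloseContainerCommand, etc.) are made up for the sketch.

```java
import java.util.List;

/**
 * Hypothetical sketch of the pipeline recovery flow in steps 1-6.
 * A pipeline starts as a 3-node Ratis ring; one failure triggers
 * container close (quorum still possible with 2 nodes), a second
 * failure before the close completes marks it inconsistent.
 */
public class DeadNodeHandler {
  enum PipelineState { OPEN, CLOSING, INCONSISTENT, CLOSED }

  private PipelineState state = PipelineState.OPEN;
  private int liveReplicas = 3; // initial 3-node Ratis ring

  /** Steps 1-3: a replica was declared dead by the staleness check. */
  public void onDatanodeDead(List<String> containersOnDeadNode) {
    liveReplicas--;
    if (liveReplicas >= 2) {
      // Step 2: two nodes remain, Ratis still guarantees consistency.
      state = PipelineState.CLOSING;
      containersOnDeadNode.forEach(this::sendCloseContainerCommand);
    } else {
      // Step 3: second failure before close succeeded.
      state = PipelineState.INCONSISTENT;
    }
  }

  /** Step 4: all replicas reported closed; replication can proceed. */
  public void onContainerClosed() {
    if (state == PipelineState.CLOSING) {
      state = PipelineState.CLOSED;
    }
  }

  private void sendCloseContainerCommand(String containerId) {
    // Placeholder for the SCM -> datanode close container command.
  }

  public PipelineState getState() {
    return state;
  }
}
```

The INCONSISTENT state corresponds to the case in step 3 where reads would only be served under the ReadInconsistentData flag.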

      This jira will introduce the mechanism to identify and handle datanode failures.
      However, handling of a) two simultaneous node failures, b) returning nodes to the healthy state, c) allowing inconsistent data reads, and d) purging of open containers on a zombie node will be done as part of separate bugs.

      Attachments

        1. HDFS-13132-HDFS-7240.001.patch
          55 kB
          Mukul Kumar Singh
        2. HDFS-13132-HDFS-7240.002.patch
          56 kB
          Mukul Kumar Singh


            People

              Assignee: shashikant Shashikant Banerjee
              Reporter: msingh Mukul Kumar Singh
              Votes: 0
              Watchers: 4
