[HDFS-12070] Failed block recovery leaves files open indefinitely and at risk for data loss


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.0-alpha
    • Fix Version/s: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 3.0.3
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      Files will remain open indefinitely if block recovery fails, which creates a high risk of data loss. The replication monitor will not replicate these blocks.

      Block recovery is a two-stage process. The NN provides the primary node with a list of candidate nodes for the recovery. In stage 1, the primary node removes any candidates that cannot init replica recovery (essentially, any that are not alive or do not know about the block), producing a sync list. In stage 2, it issues the update to every node in the sync list; unlike stage 1, the whole recovery fails if any one node fails. The NN should instead be informed of the nodes that did succeed.
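      A minimal sketch of that two-stage flow, using hypothetical simplified types (the real DN-to-DN calls are InterDatanodeProtocol#initReplicaRecovery and #updateReplicaUnderRecovery, with different signatures, and the real length/genstamp resolution is more involved):

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for the DN-to-DN recovery protocol; illustrative only.
interface RecoveryTarget {
  // Stage 1: throws if this node is dead or does not know about the block.
  ReplicaInfo initReplicaRecovery(long blockId, long recoveryId) throws IOException;
  // Stage 2: commits the agreed-upon length on this replica.
  void updateReplicaUnderRecovery(long blockId, long recoveryId, long newLength) throws IOException;
}

record ReplicaInfo(RecoveryTarget node, long numBytes) {}

class PrimaryRecoverySketch {
  void recover(long blockId, long recoveryId, List<RecoveryTarget> candidates) throws IOException {
    // Stage 1: a failure here merely prunes the candidate from the sync list.
    List<ReplicaInfo> syncList = new ArrayList<>();
    for (RecoveryTarget dn : candidates) {
      try {
        syncList.add(dn.initReplicaRecovery(blockId, recoveryId));
      } catch (IOException ignored) {
        // tolerated: candidate dropped, recovery continues
      }
    }
    if (syncList.isEmpty()) {
      throw new IOException("no live replicas for block " + blockId);
    }
    // Simplification: agree on the smallest replica length.
    long newLength = syncList.stream().mapToLong(ReplicaInfo::numBytes).min().getAsLong();

    // Stage 2: unlike stage 1, a single failure aborts the whole recovery, and
    // the NN is never told which nodes had already been updated successfully,
    // so the file stays open indefinitely. The fix is to report the successes.
    List<ReplicaInfo> succeeded = new ArrayList<>();
    for (ReplicaInfo r : syncList) {
      r.node().updateReplicaUnderRecovery(blockId, recoveryId, newLength); // throws => abort
      succeeded.add(r);
    }
    // Reporting back to the NN (commitBlockSynchronization) would go here.
  }
}
{code}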

      Manual recovery will also fail until the problematic node is temporarily stopped, so that a connection-refused error causes the bad node to be pruned from the candidate list. Recovery then succeeds, the lease is released, the under-replication is fixed, and the block is invalidated on the bad node.
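      One way to attempt the manual recovery described above is to trigger lease recovery from a client via DistributedFileSystem#recoverLease (a public HDFS API); the path handling and retry policy below are illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class LeaseRecoveryTrigger {
  public static void main(String[] args) throws Exception {
    Path file = new Path(args[0]); // the file stuck open
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    // recoverLease() returns true once the lease has been released. With the
    // bug described above, it keeps failing until the bad DN is stopped and
    // the resulting connection-refused prunes it from the candidate list.
    boolean closed = dfs.recoverLease(file);
    for (int i = 0; !closed && i < 10; i++) { // illustrative retry policy
      Thread.sleep(5000L);
      closed = dfs.recoverLease(file);
    }
    System.out.println(file + (closed ? ": lease recovered" : ": still open"));
  }
}
{code}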

      Attachments

        1. HDFS-12070.0.patch
          4 kB
          Kihwal Lee
        2. HDFS-12070.1.patch
          3 kB
          Kihwal Lee
        3. lease.patch
          2 kB
          Kihwal Lee


          People

            Assignee: Kihwal Lee
            Reporter: Daryn Sharp
            Votes: 0
            Watchers: 14

