Hadoop Common / HADOOP-4910

NameNode should exclude corrupt replicas when choosing excessive replicas to delete


Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.17.0
    • Fix Version/s: 0.18.3
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      Currently, when the NameNode handles an over-replicated block in FSNamesystem#processOverReplicatedBlock, it excludes replicas already in excessReplicateMap and replicas on decommissioned nodes, but it treats a corrupt replica as a valid one. This may lead to the unnecessary deletion of additional valid replicas and thus cause data loss. It should exclude corrupt replicas as well.
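      The intended exclusion logic can be sketched as follows. This is a minimal, self-contained illustration, not the actual FSNamesystem code: the class name ExcessReplicaChooser, the use of node-name strings in place of DatanodeDescriptor objects, and the three sets passed in are all hypothetical stand-ins for the NameNode's internal bookkeeping (excessReplicateMap, decommissioning state, and the corrupt-replica map).

      ```java
      import java.util.ArrayList;
      import java.util.List;
      import java.util.Set;

      public class ExcessReplicaChooser {
          // Given all replica locations of an over-replicated block, return the
          // candidates from which excess replicas may safely be deleted:
          // skip replicas already scheduled for deletion, replicas on nodes
          // being decommissioned, and -- the fix proposed here -- corrupt
          // replicas, which must not be counted as valid copies.
          public static List<String> chooseCandidates(List<String> locations,
                                                      Set<String> excessReplicas,
                                                      Set<String> decommissioning,
                                                      Set<String> corruptReplicas) {
              List<String> candidates = new ArrayList<>();
              for (String node : locations) {
                  if (excessReplicas.contains(node)) continue;     // already excess
                  if (decommissioning.contains(node)) continue;    // being retired
                  if (corruptReplicas.contains(node)) continue;    // the added exclusion
                  candidates.add(node);
              }
              return candidates;
          }
      }
      ```

      With this exclusion, a block with three replicas of which one is corrupt yields only the two valid replicas as deletion candidates, so the NameNode cannot delete a valid copy while counting the corrupt one toward the replication target.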

      Attachments

        1. overReplicated.patch
          1 kB
          Hairong Kuang
        2. overReplicated1.patch
          6 kB
          Hairong Kuang
        3. overReplicated2.patch
          6 kB
          Hairong Kuang
        4. overReplicated2-br18.patch
          6 kB
          Hairong Kuang

        Activity


          People

            Assignee: Hairong Kuang
            Reporter: Hairong Kuang
            Votes: 0
            Watchers: 1

