Hadoop Common / HADOOP-1135

Block report processing may incorrectly cause the namenode to delete blocks


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.12.2
    • Component/s: None
    • Labels: None

    Description

      When a block report arrives at the namenode, the namenode iterates over all the blocks on that datanode. Any block that is not valid is marked for deletion, and the blocks-to-be-deleted are sent to the datanode in the response to its next heartbeat RPC. The namenode sends at most 100 blocks-to-be-deleted at a time; this cap was introduced as part of HADOOP-994. The bug is that once the number of blocks-to-be-deleted exceeds 100, the namenode marks all the remaining blocks in the block report for deletion, whether or not they are valid, as sketched below.
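
      The following is a minimal, hypothetical Java sketch of the failure mode, not the actual FSNamesystem code: the Block stand-in, the method names, and MAX_INVALIDATE_PER_HEARTBEAT are all illustrative. The buggy variant stops consulting the set of valid blocks once the cap is reached; the corrected variant classifies every reported block and leaves the 100-block cap to the heartbeat reply that drains the list.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical stand-in for org.apache.hadoop.dfs.Block; only identity matters here.
class Block {
    final long blockId;
    Block(long blockId) { this.blockId = blockId; }
    @Override public boolean equals(Object o) {
        return o instanceof Block && ((Block) o).blockId == blockId;
    }
    @Override public int hashCode() { return (int) (blockId ^ (blockId >>> 32)); }
}

class BlockReportSketch {
    // Cap introduced by HADOOP-994: at most 100 blocks-to-be-deleted per heartbeat.
    static final int MAX_INVALIDATE_PER_HEARTBEAT = 100;

    // Faulty behavior this issue describes: once the cap is reached, the loop
    // stops checking validity and sweeps every remaining reported block.
    static List<Block> processReportBuggy(Block[] report, Set<Block> valid) {
        List<Block> toDelete = new ArrayList<Block>();
        for (Block b : report) {
            if (toDelete.size() >= MAX_INVALIDATE_PER_HEARTBEAT) {
                toDelete.add(b);                 // BUG: validity check skipped
            } else if (!valid.contains(b)) {
                toDelete.add(b);                 // genuinely invalid block
            }
        }
        return toDelete;
    }

    // Corrected behavior: classify every block in the report; the 100-block
    // cap should only limit how many deletions ride on each heartbeat reply.
    static List<Block> processReportFixed(Block[] report, Set<Block> valid) {
        List<Block> toDelete = new ArrayList<Block>();
        for (Block b : report) {
            if (!valid.contains(b)) {
                toDelete.add(b);
            }
        }
        // Callers would drain at most MAX_INVALIDATE_PER_HEARTBEAT entries
        // per heartbeat response; the rest wait for later heartbeats.
        return toDelete;
    }
}
{code}

      For example, if a datanode reports 150 invalid blocks interleaved with 1,000 valid ones, every block that appears after the 100th invalid one is marked for deletion, so most of the valid blocks can be swept along with the invalid ones.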

      Attachments

        1. blockReportInvalidateBlock2.patch (1 kB, Dhruba Borthakur)


          People

            Assignee: Dhruba Borthakur (dhruba)
            Reporter: Dhruba Borthakur (dhruba)
            Votes: 0
            Watchers: 1

            Dates

              Created:
              Updated:
              Resolved:
