Hadoop Common / HADOOP-1135

Block report processing may incorrectly cause the namenode to delete blocks

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.12.2
    • Component/s: None
    • Labels: None

    Description

    When a block report arrives at the namenode, the namenode goes through all the blocks reported by that datanode. If a block is not valid, it is marked for deletion. The blocks-to-be-deleted are sent to the datanode in the response to its next heartbeat RPC. The namenode sends at most 100 blocks-to-be-deleted at a time; this cap was introduced as part of HADOOP-994. The bug is that once the number of blocks-to-be-deleted exceeds 100, the namenode marks all the remaining blocks in the block report for deletion, valid or not.
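
    Below is a minimal sketch of the failure mode, in Java since that is Hadoop's implementation language. The names (processReportBuggy, processReportFixed, MAX_DELETES_PER_HEARTBEAT, the validity check via a set) are invented for illustration; this is not the actual FSNamesystem code, only a sketch of how a per-heartbeat cap on deletions can get entangled with the validity check.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Set;

        // Hypothetical illustration of HADOOP-1135; names and structure are
        // simplified and do not match the real FSNamesystem code.
        public class BlockReportSketch {

            // Per-heartbeat cap on deletion commands (introduced by HADOOP-994).
            static final int MAX_DELETES_PER_HEARTBEAT = 100;

            // Buggy shape: the cap check is tangled into the validity loop, so
            // once 100 deletions are queued, every remaining reported block is
            // queued for deletion whether it is valid or not.
            static List<Long> processReportBuggy(long[] reported, Set<Long> validBlocks) {
                List<Long> toDelete = new ArrayList<>();
                for (long blockId : reported) {
                    if (toDelete.size() >= MAX_DELETES_PER_HEARTBEAT
                            || !validBlocks.contains(blockId)) { // BUG: short-circuits the
                        toDelete.add(blockId);                   // validity check, condemning
                    }                                            // valid blocks past the cap
                }
                return toDelete;
            }

            // Fixed shape: validity alone decides which blocks are condemned; the
            // cap only limits how many deletions go out per heartbeat, and the
            // remainder stays queued for later heartbeats.
            static List<Long> processReportFixed(long[] reported, Set<Long> validBlocks,
                                                 List<Long> pendingDeletes) {
                for (long blockId : reported) {
                    if (!validBlocks.contains(blockId)) {
                        pendingDeletes.add(blockId);
                    }
                }
                int n = Math.min(pendingDeletes.size(), MAX_DELETES_PER_HEARTBEAT);
                List<Long> batch = new ArrayList<>(pendingDeletes.subList(0, n));
                pendingDeletes.subList(0, n).clear();
                return batch; // at most 100 deletions per heartbeat response
            }
        }

    The essential point of the fix is to keep the two concerns separate: validity decides which blocks are condemned, while the HADOOP-994 cap only throttles how many deletion commands go out in each heartbeat response.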

    People

    • Assignee: dhruba borthakur
    • Reporter: dhruba borthakur
    • Votes: 0
    • Watchers: 0
