Hadoop HDFS / HDFS-37

An invalidated block should be removed from the blockMap

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Problem
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

      Description

      Currently, when a namenode schedules the deletion of an over-replicated block, the replica to be deleted is not removed from the blockMap immediately. Instead, it is removed only when the next block report comes in. This causes three problems:
      1. getBlockLocations may return locations that no longer contain the block;
      2. Over-replication due to unsuccessful deletion cannot be detected, as described in HADOOP-4477;
      3. The number of blocks shown on the dfs Web UI does not get updated on a source node when a large number of blocks have been moved from the source node to a target node, for example, when running a balancer.
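      The requested behavior can be sketched as follows. This is a minimal illustration with hypothetical names (BlockMapSketch, invalidateReplica), not the actual FSNamesystem code: when a replica is scheduled for deletion, its location is dropped from the block map right away rather than waiting for the next block report.

      ```java
      import java.util.*;

      class BlockMapSketch {
          // blockId -> set of datanode names currently believed to hold a replica
          private final Map<Long, Set<String>> blockMap = new HashMap<>();

          void addReplica(long blockId, String datanode) {
              blockMap.computeIfAbsent(blockId, k -> new HashSet<>()).add(datanode);
          }

          // Schedule deletion of an over-replicated copy and, per this issue,
          // drop the location from the map immediately, so getBlockLocations
          // never returns a node that no longer holds the block. (The actual
          // delete command is still sent to the datanode asynchronously.)
          void invalidateReplica(long blockId, String datanode) {
              Set<String> locs = blockMap.get(blockId);
              if (locs != null) {
                  locs.remove(datanode);          // immediate removal
                  if (locs.isEmpty()) {
                      blockMap.remove(blockId);
                  }
              }
          }

          Set<String> getBlockLocations(long blockId) {
              return blockMap.getOrDefault(blockId, Collections.emptySet());
          }
      }
      ```

      With this change, a client calling getBlockLocations between the scheduling of the deletion and the next block report would no longer be directed to a datanode whose replica is already gone.
      
      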


              People

              • Assignee:
                hairong Hairong Kuang
              • Reporter:
                hairong Hairong Kuang
              • Votes: 0
              • Watchers: 4
