Hadoop HDFS / HDFS-37

An invalidated block should be removed from the blockMap


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Problem

    Description

      Currently, when the namenode schedules deletion of an over-replicated block, the replica to be deleted is not removed from the block map immediately. Instead, it is removed only when the next block report comes in. This causes three problems (a sketch of the immediate-removal alternative follows the list):
      1. getBlockLocations may return locations that no longer contain the block;
      2. over-replication caused by an unsuccessful deletion cannot be detected, as described in HADOOP-4477;
      3. the number of blocks shown on the dfs Web UI does not get updated on a source node when a large number of blocks have been moved from it to a target node, for example when running the balancer.
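      All three problems stem from the replica lingering in the block map after its deletion has been scheduled. The following is a minimal sketch contrasting the current lazy removal with the immediate removal this issue asks for; the class and method names (BlockMapSketch, scheduleInvalidationLazy, scheduleInvalidationEager) are hypothetical and do not correspond to the actual namenode code.

        import java.util.*;

        // Hypothetical, simplified model of the namenode's block map; not the real HDFS classes.
        class BlockMapSketch {
            // block id -> set of datanodes currently believed to hold a replica
            private final Map<Long, Set<String>> blockToNodes = new HashMap<>();

            void addReplica(long blockId, String node) {
                blockToNodes.computeIfAbsent(blockId, k -> new HashSet<>()).add(node);
            }

            // Current behaviour: the replica is queued for deletion on the datanode,
            // but the mapping stays in the block map until the next block report.
            void scheduleInvalidationLazy(long blockId, String node, Queue<String> invalidateQueue) {
                invalidateQueue.add(node + ":" + blockId);
                // blockToNodes is left untouched, so getBlockLocations() can still
                // return `node` even though its replica is about to be deleted.
            }

            // Behaviour this issue asks for: drop the replica from the block map
            // at the moment the deletion is scheduled.
            void scheduleInvalidationEager(long blockId, String node, Queue<String> invalidateQueue) {
                invalidateQueue.add(node + ":" + blockId);
                Set<String> nodes = blockToNodes.get(blockId);
                if (nodes != null) {
                    nodes.remove(node); // location disappears from getBlockLocations() right away
                }
            }

            List<String> getBlockLocations(long blockId) {
                return new ArrayList<>(blockToNodes.getOrDefault(blockId, Collections.emptySet()));
            }
        }

      In the eager variant, getBlockLocations() stops handing out the scheduled replica immediately, which is the behaviour the description asks for.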


    People

      Assignee: Hairong Kuang
      Reporter: Hairong Kuang
      Votes: 0
      Watchers: 4
