Hadoop HDFS / HDFS-10453

ReplicationMonitor thread could get stuck for a long time due to a race between replication and delete of the same file in a large cluster.


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.4.1, 2.5.2, 2.7.1, 2.6.4
    • Fix Version/s: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 2.7.6, 3.0.3
    • Component/s: namenode
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      The ReplicationMonitor thread could get stuck for a long time and, with low probability, lose data. Consider the typical scenario:
      (1) create and close a file with the default replication (3);
      (2) increase the replication of the file (to 10);
      (3) delete the file while ReplicationMonitor is scheduling blocks belonging to that file for replication.
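      The three steps above can be sketched with the standard HDFS CLI (the path and local file are illustrative; reproducing the race also requires the delete to land while ReplicationMonitor holds the block in its pending work list):

      ```shell
      # (1) create and close a file with the default replication factor (3)
      hdfs dfs -put localfile /tmp/repro-file

      # (2) raise the replication factor to 10, queueing 7 new replicas
      hdfs dfs -setrep 10 /tmp/repro-file

      # (3) delete the file while ReplicationMonitor is scheduling its blocks
      hdfs dfs -rm /tmp/repro-file
      ```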

      When the ReplicationMonitor stall occurs, the NameNode prints logs such as:

      2016-04-19 10:20:48,083 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 7 to reach 10 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
      ......
      2016-04-19 10:21:17,184 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 7 to reach 10 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
      2016-04-19 10:21:17,184 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 7 but only 0 storage types can be selected (replication=10, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK, DISK, DISK, DISK, DISK, DISK, DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
      2016-04-19 10:21:17,184 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 7 to reach 10 (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All required storage types are unavailable:  unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
      

      This happens because two threads (#NameNodeRpcServer and #ReplicationMonitor) process the same block at the same moment.
      (1) ReplicationMonitor#computeReplicationWorkForBlocks gets blocks to replicate and releases the global lock.
      (2) FSNamesystem#delete is invoked to delete the blocks and clears their references in the blocksMap, neededReplications, etc. The block's numBytes is set to NO_ACK (Long.MAX_VALUE), which indicates that the block deletion does not need an explicit ACK from the node.
      (3) ReplicationMonitor#computeReplicationWorkForBlocks continues to chooseTargets for the same blocks, and no node is selected even after traversing the whole cluster, because no candidate can satisfy the goodness criteria (remaining space must reach the required size, Long.MAX_VALUE).

      During stage (3) the ReplicationMonitor is stuck for a long time, especially in a large cluster. invalidateBlocks and neededReplications keep growing with no consumer; in the worst case this leads to data loss.

      This can mostly be avoided by skipping chooseTarget for BlockCommand.NO_ACK blocks and removing them from neededReplications.
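      A minimal, self-contained sketch of the guard described above (class, method, and queue names are illustrative, not the exact patch; the real fix lives inside BlockManager's replication loop):

      ```java
      import java.util.ArrayDeque;
      import java.util.Queue;

      // Simplified model of the fix: blocks whose size equals
      // BlockCommand.NO_ACK (Long.MAX_VALUE) were deleted concurrently,
      // so the replication loop drops them instead of calling
      // chooseTargets, which would scan the whole cluster in vain.
      public class SkipNoAckDemo {
          static final long NO_ACK = Long.MAX_VALUE; // mirrors BlockCommand.NO_ACK

          // Returns how many blocks actually had replication work scheduled.
          static long scheduleReplications(Queue<Long> neededReplications) {
              long scheduled = 0;
              while (!neededReplications.isEmpty()) {
                  long numBytes = neededReplications.poll();
                  if (numBytes == NO_ACK) {
                      // Deleted block: no datanode can satisfy a required
                      // size of Long.MAX_VALUE, so skip chooseTargets and
                      // drop it from the queue entirely.
                      continue;
                  }
                  scheduled++; // stand-in for chooseTargets + scheduling
              }
              return scheduled;
          }

          public static void main(String[] args) {
              Queue<Long> needed = new ArrayDeque<>();
              needed.add(128L * 1024 * 1024); // live block
              needed.add(NO_ACK);             // block deleted mid-scan
              needed.add(64L * 1024 * 1024);  // live block
              System.out.println(scheduleReplications(needed)); // prints 2
          }
      }
      ```

      The key property is that the deleted block is discarded in O(1) rather than after a fruitless traversal of every datanode, so neededReplications can drain even while deletes race with the monitor.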

      Attachments

        1. HDFS-10453.001.patch
          5 kB
          Xiaoqiao He
        2. HDFS-10453-branch-2.001.patch
          5 kB
          Xiaoqiao He
        3. HDFS-10453-branch-2.003.patch
          5 kB
          Xiaoqiao He
        4. HDFS-10453-branch-2.7.004.patch
          5 kB
          Xiaoqiao He
        5. HDFS-10453-branch-2.7.005.patch
          5 kB
          Xiaoqiao He
        6. HDFS-10453-branch-2.7.006.patch
          6 kB
          Xiaoqiao He
        7. HDFS-10453-branch-2.7.007.patch
          7 kB
          Xiaoqiao He
        8. HDFS-10453-branch-2.7.008.patch
          5 kB
          Xiaoqiao He
        9. HDFS-10453-branch-2.7.009.patch
          2 kB
          Xiaoqiao He
        10. HDFS-10453-branch-2.8.001.patch
          5 kB
          Xiaoqiao He
        11. HDFS-10453-branch-2.8.002.patch
          2 kB
          Xiaoqiao He
        12. HDFS-10453-branch-2.9.001.patch
          4 kB
          Xiaoqiao He
        13. HDFS-10453-branch-2.9.002.patch
          2 kB
          Xiaoqiao He
        14. HDFS-10453-branch-3.0.001.patch
          6 kB
          Xiaoqiao He
        15. HDFS-10453-branch-3.0.002.patch
          3 kB
          Xiaoqiao He
        16. HDFS-10453-trunk.001.patch
          6 kB
          Xiaoqiao He
        17. HDFS-10453-trunk.002.patch
          3 kB
          Xiaoqiao He


          People

            Assignee: hexiaoqiao Xiaoqiao He
            Reporter: hexiaoqiao Xiaoqiao He
            Votes: 2
            Watchers: 33

            Dates

              Created:
              Updated:
              Resolved:
