Details
Description
The ReplicationMonitor thread can get stuck for a long time and, with low probability, lose data. Consider the typical scenario (a code sketch of the steps follows):
(1) create and close a file with the default replication (3);
(2) increase the replication of the file to 10;
(3) delete the file while the ReplicationMonitor is scheduling blocks belonging to that file for replication.
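A minimal reproduction sketch of these steps against the FileSystem client API (the path and write size are illustrative assumptions, not taken from this issue):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationMonitorRepro {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/tmp/repro-file");  // hypothetical path

    // (1) create and close a file with the default replication (3)
    try (FSDataOutputStream out = fs.create(file, (short) 3)) {
      out.write(new byte[4096]);
    }

    // (2) increase replication to 10; the block now enters neededReplications
    fs.setReplication(file, (short) 10);

    // (3) delete the file; if this lands while the ReplicationMonitor has already
    //     pulled the block for replication work, the race described below occurs
    fs.delete(file, false);
  }
}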
When the ReplicationMonitor gets stuck in this way, the NameNode prints logs like:
2016-04-19 10:20:48,083 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 7 to reach 10 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
......
2016-04-19 10:21:17,184 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 7 to reach 10 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2016-04-19 10:21:17,184 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 7 but only 0 storage types can be selected (replication=10, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK, DISK, DISK, DISK, DISK, DISK, DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2016-04-19 10:21:17,184 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 7 to reach 10 (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All required storage types are unavailable: unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
This happens because two threads (NameNodeRpcServer and ReplicationMonitor) process the same block at the same moment:
(1) ReplicationMonitor#computeReplicationWorkForBlocks gets the blocks to replicate and releases the global lock.
(2) FSNamesystem#delete is invoked to delete the blocks; it clears the references in the blocksMap, neededReplications, etc., and the block's numBytes is set to NO_ACK (Long.MAX_VALUE), which indicates that the block deletion does not need an explicit ACK from the node.
(3) ReplicationMonitor#computeReplicationWorkForBlocks continues to chooseTargets for the same blocks, but no node is selected even after traversing the whole cluster, because no candidate satisfies the goodness criteria (its remaining space would have to reach the required size of Long.MAX_VALUE).
During stage (3) the ReplicationMonitor is stuck for a long time, especially in a large cluster. invalidateBlocks and neededReplications keep growing with nothing consuming them; in the worst case data is lost.
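To illustrate why stage (3) never finds a target, here is a simplified, paraphrased sketch of the per-storage space test applied by the placement policy (conceptual only, not the literal BlockPlacementPolicyDefault code):

// Simplified: the placement policy rejects a candidate storage whose remaining
// space cannot hold the block being replicated.
boolean hasEnoughSpace(long blockNumBytes, long storageRemainingBytes) {
  // After the delete, blockNumBytes == BlockCommand.NO_ACK == Long.MAX_VALUE,
  // so this returns false for every storage in the cluster and chooseTarget
  // scans all nodes before giving up.
  return storageRemainingBytes >= blockNumBytes;
}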
This can mostly be avoided by skipping chooseTarget for a block whose size is BlockCommand.NO_ACK and removing it from neededReplications.
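A hedged sketch of such a guard inside BlockManager#computeReplicationWorkForBlocks, placed before chooseTargets is called for each queued block (simplified pseudocode; the actual patch may differ):

// After re-acquiring the namesystem lock, before calling chooseTargets:
if (block.getNumBytes() == BlockCommand.NO_ACK) {
  // The block was deleted while replication work was being computed;
  // drop it instead of searching the whole cluster for an impossible target.
  neededReplications.remove(block, priority);
  continue;
}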
Attachments
Issue Links
- is duplicated by
  - HDFS-8718 Block replicating cannot work after upgrading to 2.7 (Resolved)
- relates to
  - HDFS-13638 DataNode Can't replicate block because NameNode thinks the length is 9223372036854775807 (Resolved)
  - HDFS-14720 DataNode shouldn't report block as bad block if the block length is Long.MAX_VALUE. (Resolved)