Details
Description
When the DirectoryScanner finds differences between the blocks on disk and the in-memory block map, it runs checkAndUpdate to reconcile them. However, FsDatasetImpl.checkAndUpdate is a synchronized call.
Each of our datanodes has about 6 million blocks, and every 6-hour scan finds roughly 25,000 abnormal blocks to fix. That leads to the FsDatasetImpl lock being held for a long time.
Assuming each block takes 10 ms to fix (because of SAS disk latency), fixing all of them takes 250 seconds. That means all reads and writes on that datanode are blocked for more than 4 minutes.
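The back-of-envelope estimate above can be written out explicitly (the numbers are the ones reported in this issue; the 10 ms per-block latency is the stated assumption, not a measurement):

```java
/** Back-of-envelope estimate of how long the FsDatasetImpl lock is held. */
public class LockHoldEstimate {
    public static void main(String[] args) {
        long abnormalBlocks = 25_000; // abnormal blocks found per 6-hour scan
        long fixMillis = 10;          // assumed per-block fix latency on a SAS disk
        long totalSeconds = abnormalBlocks * fixMillis / 1000;
        System.out.println("Lock held for about " + totalSeconds + " seconds");
        // 250 seconds, i.e. more than 4 minutes of blocked reads and writes
    }
}
```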
2019-05-06 08:06:51,704 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool BP-1644920766-10.223.143.220-1450099987967 Total blocks: 6850197, missing metadata files: 23574, missing block files: 23574, missing blocks in memory: 47625, mismatched blocks: 0
...
2019-05-06 08:16:41,625 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Took 588402ms to process 1 commands from NN
Processing commands from the NameNode takes a long time because threads are blocked, so the NameNode sees a long lastContact time for this datanode.
This likely affects all HDFS versions.
How to fix:
Just as invalidate commands from the NameNode are processed with a batch size of 1000, these abnormal blocks should be fixed in batches as well, sleeping 2 seconds between batches to allow normal block reads and writes to proceed.
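The proposed batching could look roughly like the sketch below. The class and method names here are hypothetical illustrations, not actual HDFS APIs; the real change would live inside FsDatasetImpl.checkAndUpdate so that the dataset lock is only held per batch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/** Sketch of fixing abnormal blocks in batches instead of under one long lock. */
public class BatchedScanFixer {
    static final int BATCH_SIZE = 1000; // same batch size used for invalidate commands
    static final long SLEEP_MS = 2000;  // pause between batches for normal I/O

    /** Split the abnormal-block list into fixed-size batches. */
    static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            batches.add(items.subList(i, Math.min(i + batchSize, items.size())));
        }
        return batches;
    }

    /** Fix blocks batch by batch, sleeping between batches so that queued
     *  readers/writers can acquire the FsDatasetImpl lock in between. */
    static <T> void fixInBatches(List<T> abnormalBlocks, Consumer<List<T>> fixBatch)
            throws InterruptedException {
        List<List<T>> batches = partition(abnormalBlocks, BATCH_SIZE);
        for (int i = 0; i < batches.size(); i++) {
            // In the real fix, the synchronized section covers only this batch.
            fixBatch.accept(batches.get(i));
            if (i < batches.size() - 1) {
                Thread.sleep(SLEEP_MS);
            }
        }
    }
}
```

With 25,000 abnormal blocks this yields 25 batches, so no single lock acquisition lasts longer than roughly one batch's worth of disk fixes.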
Attachments
Issue Links
- breaks: HDFS-14751 Synchronize on diffs in DirectoryScanner (Resolved)
- causes: HDFS-15048 Fix findbug in DirectoryScanner (Resolved)
- is duplicated by: HDFS-14126 DataNode DirectoryScanner holding global lock for too long (Resolved)
- is related to: HDFS-14126 DataNode DirectoryScanner holding global lock for too long (Resolved)