Details

- Type: Bug
- Status: Closed
- Priority: Blocker
- Resolution: Fixed
- Affects Version/s: 0.12.2
- Fix Version/s: None
- Component/s: None
Description
HADOOP-1170 removed the disk checking feature, but this feature is needed for maintaining a large cluster. I agree that checking the disk on every I/O is too costly. A nicer approach is to have a thread that periodically does a disk check; the datanode then automatically decommissions itself when any error occurs.
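For illustration, here is a minimal sketch of the proposed approach: a daemon thread that probes each configured data directory on an interval and invokes a caller-supplied failure callback the first time a check fails. The names PeriodicDiskChecker, checkIntervalMs, and onDiskFailure are illustrative assumptions, not identifiers from the actual Hadoop patch.

{code:java}
import java.io.File;
import java.io.IOException;

// Sketch only: a background thread that periodically verifies each data
// directory is readable and writable, and runs a failure callback (e.g.
// decommission / shut down the datanode) on the first error. All names
// here are hypothetical, not taken from the Hadoop codebase.
public class PeriodicDiskChecker implements Runnable {

  private final File[] dataDirs;        // directories configured for block storage
  private final long checkIntervalMs;   // how often to probe the disks
  private final Runnable onDiskFailure; // action to take when a disk check fails

  public PeriodicDiskChecker(File[] dataDirs, long checkIntervalMs,
                             Runnable onDiskFailure) {
    this.dataDirs = dataDirs;
    this.checkIntervalMs = checkIntervalMs;
    this.onDiskFailure = onDiskFailure;
  }

  /** Probes one directory by creating and deleting a temporary file. */
  private static void checkDir(File dir) throws IOException {
    if (!dir.isDirectory() || !dir.canRead() || !dir.canWrite()) {
      throw new IOException("Cannot access data directory " + dir);
    }
    File probe = File.createTempFile("diskcheck", ".tmp", dir);
    if (!probe.delete()) {
      throw new IOException("Cannot delete probe file in " + dir);
    }
  }

  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      try {
        for (File dir : dataDirs) {
          checkDir(dir);
        }
        Thread.sleep(checkIntervalMs);
      } catch (IOException e) {
        // A disk failed: stop serving instead of failing individual I/Os.
        onDiskFailure.run();
        return;
      } catch (InterruptedException e) {
        return; // datanode is shutting down
      }
    }
  }

  public static void start(File[] dirs, long intervalMs, Runnable onFailure) {
    Thread t = new Thread(new PeriodicDiskChecker(dirs, intervalMs, onFailure));
    t.setDaemon(true); // do not block JVM exit on this thread
    t.start();
  }
}
{code}

The point of the design is that the per-connection cost removed in HADOOP-1170 moves into a single low-frequency background probe, so normal I/O never pays for the check.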
Issue Links

- is related to:
  - HADOOP-163: If a DFS datanode cannot write onto its file system, it should tell the name node not to assign new blocks to it. (Closed)
  - HADOOP-1170: Very high CPU usage on data nodes because of FSDataset.checkDataDir() on every connect (Closed)