Details
- Type: Test
- Status: Closed
- Priority: Major
- Resolution: Fixed
Description
This is a map-reduce-based test that checks the consistency of the file system
by reading all blocks of all files and detecting which of them are missing or corrupted.
See HADOOP-95 and HADOOP-101 for related discussion.
This could be an alternative to the sequential check performed by dfsck.
It would be nice to integrate the distributed check with dfsck, but I don't yet see how.
This test reuses classes defined in HADOOP-193.
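For illustration, here is a minimal sketch (not the code attached to this issue or to HADOOP-193) of how such a map-reduce check could look: each map task receives a DFS file path, reads the file block by block, and emits a record for every block it cannot read. The class name BlockReadCheckMapper, the use of the org.apache.hadoop.mapreduce API, and the assumption that the job input lists one file path per line are illustrative choices only.
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

/**
 * Illustrative mapper: the input value is a DFS file path; the mapper reads
 * the whole file and emits one record per block that cannot be read.
 */
public class BlockReadCheckMapper
    extends Mapper<LongWritable, Text, Text, Text> {

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    Configuration conf = context.getConfiguration();
    Path file = new Path(value.toString().trim());
    FileSystem fs = file.getFileSystem(conf);

    FileStatus status = fs.getFileStatus(file);
    long blockSize = status.getBlockSize();
    long length = status.getLen();
    byte[] buffer = new byte[1 << 20];          // read 1 MB at a time

    try (FSDataInputStream in = fs.open(file)) {
      for (long offset = 0; offset < length; offset += buffer.length) {
        int toRead = (int) Math.min(buffer.length, length - offset);
        long blockIndex = offset / blockSize;   // which DFS block this read falls in
        try {
          in.readFully(offset, buffer, 0, toRead);
        } catch (IOException e) {
          // ChecksumException extends IOException, so both corrupted and
          // missing blocks end up being reported here.
          context.write(new Text(file.toString()),
              new Text("unreadable block " + blockIndex
                  + " (offset " + offset + "): " + e.getMessage()));
        }
      }
    }
  }
}
{code}
The job output (optionally aggregated by a reduce step keyed on the file path) then lists, per file, the blocks that could not be read, which is essentially the information a dfsck-style report needs.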