Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
Description
If a replica's on-disk length does not match the length recorded in the blockMap, we should mark the block as corrupted. This would help clear the polluted replicas caused by HADOOP-4810, and would also detect cases where an on-disk block file has been truncated or enlarged manually by accident.
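The check described above can be sketched as follows. This is a minimal, hypothetical illustration, not the actual HDFS implementation: the class name, the in-memory map standing in for the NameNode's blockMap, and the `isCorrupt` helper are all assumptions made for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the proposed length check: compare a replica's
// on-disk length against the expected length recorded in the block map,
// and flag a mismatch as corruption.
public class BlockLengthCheck {
    // blockId -> expected length; a stand-in for the real blockMap.
    static final Map<Long, Long> blockMap = new HashMap<>();

    // Returns true when the replica should be marked corrupt because its
    // on-disk length disagrees with the length recorded in the block map.
    static boolean isCorrupt(long blockId, long onDiskLength) {
        Long expected = blockMap.get(blockId);
        if (expected == null) {
            return false; // unknown block; handled by a different path
        }
        // A manually truncated or enlarged block file no longer matches
        // the recorded length, so it is reported as corrupt.
        return onDiskLength != expected;
    }

    public static void main(String[] args) {
        blockMap.put(1001L, 64L * 1024 * 1024); // expect a 64 MB block
        System.out.println(isCorrupt(1001L, 64L * 1024 * 1024)); // length matches
        System.out.println(isCorrupt(1001L, 10L * 1024 * 1024)); // truncated replica
    }
}
```

Such a check would run wherever replica lengths are reported, e.g. during block reports, so that mismatched replicas are scheduled for deletion and re-replication.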
Issue Links
- is duplicated by: HDFS-2251 Namenode does not recognize incorrectly sized blocks (Open)
- is related to: HADOOP-4810 Data lost at cluster startup time (Closed)