Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 0.1.0
- Fix Version/s: None
- Component/s: None
Description
DFS needs a validation operation similar to fsck, so that we can find out which files are corrupted and which data blocks are missing.
The DFS namenode should also log more specific information, such as which block is being replicated or deleted, so that when something goes wrong we have a clue about what happened.
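A minimal sketch of the kind of validation this issue asks for (this is not the actual HDFS fsck implementation; the class, method, and data shapes here are hypothetical): given the namenode's file-to-blocks mapping and the set of blocks the datanodes have reported, flag each file whose blocks are missing.

```java
import java.util.*;

// Hypothetical sketch of an fsck-style namespace check: walk every file,
// compare its block list against the blocks datanodes have reported,
// and collect the files with missing blocks.
public class DfsValidator {
    public static Map<String, List<String>> findCorruptFiles(
            Map<String, List<String>> fileToBlocks,
            Set<String> reportedBlocks) {
        Map<String, List<String>> corrupt = new LinkedHashMap<>();
        for (Map.Entry<String, List<String>> e : fileToBlocks.entrySet()) {
            List<String> missing = new ArrayList<>();
            for (String block : e.getValue()) {
                if (!reportedBlocks.contains(block)) {
                    missing.add(block); // no datanode holds this block
                }
            }
            if (!missing.isEmpty()) {
                // file is corrupted: at least one of its blocks is gone
                corrupt.put(e.getKey(), missing);
            }
        }
        return corrupt;
    }
}
```

The real check would run inside the namenode against its block map rather than over plain collections, but the report it produces, corrupted files plus their missing blocks, is exactly what the description requests.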
Attachments
Issue Links
- is related to: HADOOP-500 Datanode should scan blocks continuously to detect bad blocks / CRC errors (Closed)