Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Hadoop Flags: Reviewed
Description
We have seen an instance where an external outage caused many datanodes to reboot at around the same time, resulting in many corrupted blocks. All of these were recently written blocks; the current HDFS datanode implementation does not sync the data of a block file to disk when the block is closed.
1. Add a cluster-wide config setting that causes the datanode to sync a block file when the block is finalized.
2. Introduce a new parameter to FileSystem.create() that triggers the same behaviour per file, i.e. causes the datanode to sync a block file when it is finalized.
3. Implement FSDataOutputStream.hsync() to cause all data written to the specified file to be written to stable storage (see the usage sketch below).
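As a rough illustration of how a client might use items 2 and 3, here is a minimal Java sketch against the public Hadoop FileSystem API. It assumes the per-file trigger is exposed as CreateFlag.SYNC_BLOCK and the durability call as FSDataOutputStream.hsync(), which is how these ideas surfaced in the API; the path, replication, and block-size values are arbitrary.

```java
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.Progressable;

public class HsyncSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path("/tmp/durable-file"); // hypothetical path

    // Item 2: ask the datanodes to sync each block file to disk when the
    // block is finalized, for this file only, via CreateFlag.SYNC_BLOCK.
    FSDataOutputStream out = fs.create(
        path,
        FsPermission.getFileDefault(),
        EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE, CreateFlag.SYNC_BLOCK),
        4096,                  // buffer size
        (short) 3,             // replication factor
        128L * 1024 * 1024,    // block size
        (Progressable) null);  // no progress callback

    out.write("record that must survive a power failure\n".getBytes("UTF-8"));

    // Item 3: hsync() flushes buffered data through the pipeline and asks
    // each datanode to push it to stable storage before the call returns.
    out.hsync();
    out.close();
  }
}
```

The cluster-wide setting of item 1 corresponds, in current Hadoop releases, to the datanode property dfs.datanode.synconclose (an assumption about the eventual naming); enabling it forces sync-on-finalize for every block regardless of client flags.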
Issue Links
- relates to: HDFS-5042 Completed files lost after power failure (Resolved)