Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Incomplete
- Affects Version/s: 1.2.0
- Fix Version/s: None
Description
Currently, in Spark Streaming's WAL manager, data is written to HDFS with multiple retries on failure. Because there is no transactional guarantee, the partially written data from a failed attempt is not rolled back, and the retried data is appended after it. This corrupts the file and causes the WriteAheadLogReader to fail when reading it back.
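To illustrate the failure mode, here is a minimal Scala sketch (not the actual WAL writer/reader code; it assumes a simple length-prefixed record framing, i.e. a 4-byte length followed by the payload) that simulates a partial write followed by a retried append, and then shows the reader hitting corrupted records:

{code}
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, DataInputStream, DataOutputStream, IOException}

object WalCorruptionDemo {
  // Assumed framing: 4-byte length prefix, then the payload bytes.
  private def framed(payload: Array[Byte]): Array[Byte] = {
    val buf = new ByteArrayOutputStream()
    val out = new DataOutputStream(buf)
    out.writeInt(payload.length)
    out.write(payload)
    out.flush()
    buf.toByteArray
  }

  def main(args: Array[String]): Unit = {
    val record = "some streaming block data".getBytes("UTF-8")
    val full = framed(record)

    // First attempt fails midway: only part of the framed record reaches the file.
    val partial = full.take(full.length / 2)

    // The retry appends the whole framed record after the partial bytes,
    // because the partial write cannot be truncated or overwritten on HDFS.
    val fileContents = partial ++ full

    // The reader walks the file record by record: read a length, then that many bytes.
    val in = new DataInputStream(new ByteArrayInputStream(fileContents))
    try {
      while (true) {
        val len = in.readInt()
        if (len < 0 || len > in.available()) {
          // After the truncated record, the "length" is read from the middle of a payload.
          throw new IOException(s"corrupt record length: $len")
        }
        val payload = new Array[Byte](len)
        in.readFully(payload)
        // The first record read here is already a corrupted mix of the partial
        // write and the retried record.
        println(s"read record of $len bytes: ${new String(payload, "UTF-8")}")
      }
    } catch {
      case e: IOException =>
        println(s"read failed: ${e.getClass.getSimpleName}: ${e.getMessage}")
    }
  }
}
{code}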
First, I think this problem is hard to fix, because HDFS supports neither a truncate operation (HDFS-3107) nor random writes at a specific offset.
Second, I think that when we hit such a write exception it is better not to retry: retrying will corrupt the file and make reads fail.
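As a rough sketch of that suggestion (a hypothetical wrapper, not Spark code), the writer could propagate the failure instead of retrying an append into a file that may already contain a partial record:

{code}
import java.io.{DataOutputStream, IOException, OutputStream}

// Hypothetical writer wrapper: fail fast instead of retrying the append.
class NonRetryingWalWriter(stream: OutputStream) {
  private val out = new DataOutputStream(stream)

  def write(payload: Array[Byte]): Unit = {
    try {
      out.writeInt(payload.length)
      out.write(payload)
      out.flush()
    } catch {
      case e: IOException =>
        // Do not retry on the same stream: the file may now hold a partial
        // record, and appending again would break the record framing.
        // Surface the failure so the caller can roll over to a new log file.
        throw new IOException("WAL write failed; not retrying on this file", e)
    }
  }
}
{code}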
Sorry if I have misunderstood anything.
Issue Links
- relates to: SPARK-6222 [STREAMING] All data may not be recovered from WAL when driver is killed (Resolved)