SPARK-5142

Data may be corrupted in Spark Streaming's WAL mechanism.


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Incomplete
    • Affects Version/s: 1.2.0
    • Fix Version/s: None
    • Component/s: DStreams

    Description

      Currently, Spark Streaming's WAL manager retries writing data to HDFS when a write fails. Because there is no transactional guarantee, the partially written data from the failed attempt is not rolled back, and the retried data is appended after it. This corrupts the file and causes WriteAheadLogReader to fail when reading the data back.
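
      To illustrate the failure mode, here is a minimal, self-contained sketch. It is not Spark's actual WAL code; it only assumes the common length-prefixed framing (a 4-byte length followed by the payload), and all names in it are made up for the example:

      import java.io.{ByteArrayInputStream, ByteArrayOutputStream, DataInputStream, DataOutputStream}

      object WalCorruptionDemo {
        // Each record is framed as a 4-byte length prefix followed by the payload.
        def writeRecord(out: DataOutputStream, payload: Array[Byte]): Unit = {
          out.writeInt(payload.length)
          out.write(payload)
        }

        def main(args: Array[String]): Unit = {
          val buf = new ByteArrayOutputStream()
          val out = new DataOutputStream(buf)
          val record = "block-data".getBytes("UTF-8")

          // First attempt "fails" mid-write: the length prefix and half the
          // payload reach the log, and nothing rolls those bytes back.
          out.writeInt(record.length)
          out.write(record, 0, record.length / 2)

          // The retry then appends a complete copy after the partial one.
          writeRecord(out, record)

          // A reader consumes the first length prefix, swallows bytes that
          // belong to the retried record, and misparses everything after that.
          val in = new DataInputStream(new ByteArrayInputStream(buf.toByteArray))
          try {
            while (true) {
              val len = in.readInt()
              require(len >= 0 && len < 1024, s"implausible record length $len")
              val payload = new Array[Byte](len)
              in.readFully(payload)
              println(s"read a record of $len bytes (contents may be garbage)")
            }
          } catch {
            case e: Exception => println(s"reader failed, log is corrupted: ${e.getMessage}")
          }
        }
      }

      Running this prints one garbage "record" (the partial bytes glued onto the retried record's length prefix) and then fails on an implausible length, which matches the WriteAheadLogReader failure described above.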

      First, I think this problem is hard to fix, because HDFS supports neither a truncate operation (HDFS-3107) nor random writes at a specific offset, so the partially written bytes cannot be removed or overwritten.

      Second, I think that when we hit such a write exception, it is better not to retry: retrying corrupts the file and makes reads fail. The non-retrying alternative is sketched below.
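
      As a sketch of that suggestion (a hypothetical helper, not Spark's API): surface the failure to the caller instead of retrying the append, so the log is never left with partial and complete copies interleaved:

      import java.io.{DataOutputStream, IOException}

      object NoRetryWalWriter {
        // Hypothetical helper, not Spark's API: on a write failure,
        // propagate the error instead of retrying the append.
        def writeRecordOrFail(out: DataOutputStream, payload: Array[Byte]): Unit = {
          try {
            out.writeInt(payload.length)   // same length-prefix framing as above
            out.write(payload)
            out.flush()
          } catch {
            case e: IOException =>
              // No retry: a second append would land after the partial bytes
              // already in the file and break the framing for later records.
              throw e
          }
        }
      }

      Under this policy a crash can still leave one truncated record at the end of the file, but a reader can detect that (fewer bytes remain than the length prefix claims) and stop cleanly, instead of misreading interleaved partial and retried records.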

      Sorry if I have misunderstood anything.


            People

              Assignee: Unassigned
              Reporter: Saisai Shao (jerryshao)
              Votes: 0
              Watchers: 3
