Spark / SPARK-3553

Spark Streaming app streams files that have already been streamed in an endless loop


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 1.0.1
    • Fix Version/s: None
    • Component/s: DStreams
    • Environment: EC2 cluster - YARN

    Description

      We have a Spark Streaming app deployed in a YARN EC2 cluster with one name node and two data nodes. We submit the app with 11 executors, each with 1 core and 588 MB of RAM.
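      For reference, the submission corresponds roughly to a command like the following (the jar and main class names here are made up for illustration; the resource flags are the standard spark-submit options for YARN):

      spark-submit --master yarn-cluster \
        --class com.example.S3StreamingApp \
        --num-executors 11 \
        --executor-cores 1 \
        --executor-memory 588m \
        streaming-app.jar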
      The app streams from a directory in S3 that is constantly being written to; this is the line of code that achieves that:

      val lines = ssc.fileStream[LongWritable, Text, TextInputFormat](Settings.S3RequestsHost, (f: Path) => true, true)

      The purpose of using fileStream instead of textFileStream is to customize how Spark handles existing files when the process starts: we want to process only the files added after the process launches and skip the ones that already exist. We configured a batch duration of 10 seconds. A minimal sketch of the whole setup follows.
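      For completeness, this is roughly the full setup as a self-contained sketch (Settings.S3RequestsHost is our own config value holding the S3 directory URL; the app name and the print() output action are illustrative):

      import org.apache.hadoop.fs.Path
      import org.apache.hadoop.io.{LongWritable, Text}
      import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
      import org.apache.spark.SparkConf
      import org.apache.spark.streaming.{Seconds, StreamingContext}

      val conf = new SparkConf().setAppName("S3RequestsStream")
      val ssc = new StreamingContext(conf, Seconds(10)) // 10-second batches

      // The third argument (newFilesOnly = true) tells Spark to pick up only
      // files added after the app starts; the filter accepts every path.
      val lines = ssc.fileStream[LongWritable, Text, TextInputFormat](
        Settings.S3RequestsHost, (f: Path) => true, true)

      lines.map(_._2.toString).print() // the Text values hold the file lines

      ssc.start()
      ssc.awaitTermination()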
      The process goes fine while we add a small number of files to S3, say 4 or 5. We can see in the streaming UI how the stages execute successfully on the executors, one for each file processed. But when we add a larger number of files, we see strange behavior: the application starts streaming files that have already been streamed.
      For example, I add 20 files to S3. They are processed in 3 batches: the first batch processes 7 files, the second 8, and the third 5. No more files are added to S3 at this point, but Spark starts repeating these batches endlessly with the same files.
      Any thoughts on what could be causing this?
      Regards,
      Easyb

      Attachments

        Activity

          People

            Assignee: Unassigned
            Reporter: Ezequiel Bella (easyB)
            Votes: 1
            Watchers: 4

            Dates

              Created:
              Updated:
              Resolved: