[SPARK-32962] Spark Streaming


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Trivial
    • Resolution: Invalid
    • Affects Version/s: 2.4.5
    • Fix Version/s: None
    • Component/s: DStreams
    • Labels: None

      Description

      Hey there,

      I'm running a Spark Streaming job that is integrated with Kafka (and manages its offset commits in Kafka itself).

      The problem is that when a failure occurs, I want to reprocess the offset ranges that went wrong, so I catch the exception and do NOT commit that range (via commitAsync).

      However, I notice the stream keeps proceeding, without any commit being made.

      Moreover, I later removed all the commitAsync calls entirely, and the stream still kept proceeding!

      I guess there might be some inner cache or similar mechanism that lets the streaming job keep consuming entries from Kafka.
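
      For reference, my setup looks roughly like the minimal sketch below, using the spark-streaming-kafka-0-10 direct stream API (the broker address, topic name, group id, and batch interval are placeholders, and the processing step is a stand-in):

      import org.apache.kafka.common.serialization.StringDeserializer
      import org.apache.spark.SparkConf
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      import org.apache.spark.streaming.kafka010._
      import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
      import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

      object KafkaCommitSketch {
        def main(args: Array[String]): Unit = {
          // Local master only for testing the sketch; normally set via spark-submit.
          val conf = new SparkConf().setAppName("kafka-commit-sketch").setMaster("local[2]")
          val ssc  = new StreamingContext(conf, Seconds(10))

          // Auto-commit is disabled so offsets are stored only via commitAsync below.
          val kafkaParams = Map[String, Object](
            "bootstrap.servers"  -> "localhost:9092",
            "key.deserializer"   -> classOf[StringDeserializer],
            "value.deserializer" -> classOf[StringDeserializer],
            "group.id"           -> "example-group",
            "auto.offset.reset"  -> "latest",
            "enable.auto.commit" -> (false: java.lang.Boolean)
          )

          val stream = KafkaUtils.createDirectStream[String, String](
            ssc, PreferConsistent, Subscribe[String, String](Seq("my-topic"), kafkaParams))

          stream.foreachRDD { rdd =>
            val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
            try {
              rdd.foreach(record => println(record.value)) // stand-in for the real processing
              // Commit to Kafka only once the whole batch has succeeded.
              stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
            } catch {
              case _: Exception =>
                // Commit skipped on failure. Note: the running direct stream tracks its
                // position in memory, so skipping the commit does not make it re-read this
                // range; as far as I can tell, committed offsets are only read back when
                // the application restarts.
            }
          }

          ssc.start()
          ssc.awaitTermination()
        }
      }

      If that in-memory position tracking is indeed the behavior, it would explain why the stream keeps advancing even when no commits are made.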

      Could you please advise?


            People

            • Assignee: Unassigned
            • Reporter: Amit Menashe (amit.menashe)
            • Votes: 0
            • Watchers: 2
