SPARK-18682: Batch Source for Kafka


Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.1.1, 2.2.0
    • Component/s: SQL, Structured Streaming
    • Labels: None

    Description

      Today, you can start a stream that reads from Kafka. However, given Kafka's configurable retention period, sometimes you might just want to read all of the data that is currently available. As such, we should add a version that works with spark.read as well.

      The options should be the same as those of the streaming Kafka source, with the following differences:

      • startingOffsets should default to earliest, and should not allow latest (which would always yield an empty result).
      • endingOffsets should also be supported and should default to latest. The same assign-style JSON format used for startingOffsets should also be accepted here (a sketch follows this list).
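
      For concreteness, a minimal sketch of the per-partition JSON form, assuming the format already accepted by the streaming source carries over unchanged (the topic name and offset numbers below are made up; in that format -2 refers to earliest and -1 to latest):

        // Hypothetical per-partition offsets, in the assign-style JSON format
        // the streaming source already accepts for startingOffsets.
        // -2 means earliest and -1 means latest for a given partition.
        val startingOffsets = """{"topicA":{"0":23,"1":-2}}"""
        val endingOffsets   = """{"topicA":{"0":50,"1":-1}}"""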

      It would be really good if things like .limit(n) were enough to prevent all of the data from being read (this might just work).
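
      A minimal sketch of how the batch read might look, assuming the option names mirror the streaming source (the bootstrap servers and topic name are placeholders):

        import org.apache.spark.sql.SparkSession

        val spark = SparkSession.builder()
          .appName("KafkaBatchRead")
          .getOrCreate()

        // spark.read instead of spark.readStream; options mirror the streaming source.
        val df = spark.read
          .format("kafka")
          .option("kafka.bootstrap.servers", "host1:9092")  // placeholder brokers
          .option("subscribe", "topicA")                     // placeholder topic
          .option("startingOffsets", "earliest")             // proposed default
          .option("endingOffsets", "latest")                 // proposed default
          .load()

        // Whether .limit(n) can prune the scan is the open question above;
        // at minimum it bounds the rows returned.
        df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
          .limit(100)
          .show()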


            People

              Assignee: Tyson Condie (tcondie)
              Reporter: Michael Armbrust (marmbrus)
              Votes: 0
              Watchers: 8
