Spark / SPARK-17267 Long running structured streaming requirements / SPARK-16963

Change Source API so that sources do not need to keep unbounded state



    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Versions: 2.0.0, 2.0.1
    • Fix Versions: 2.0.2, 2.1.0
    • Component: Structured Streaming
    • Labels: None


      The version of the Source API in Spark 2.0.0 defines a single getBatch() method for fetching records from the source, with the following Scaladoc comments defining the semantics:

      /**
       * Returns the data that is between the offsets (`start`, `end`]. When `start` is `None` then
       * the batch should begin with the first available record. This method must always return the
       * same data for a particular `start` and `end` pair.
       */
      def getBatch(start: Option[Offset], end: Offset): DataFrame

      These semantics mean that a Source must retain all past history for the stream that it backs. Further, a Source is also required to retain this data across restarts of the process where the Source is instantiated, even when the Source is restarted on a different machine.
      These restrictions make it difficult to implement the Source API, as any implementation requires potentially unbounded amounts of distributed storage.
      See the mailing list thread at http://apache-spark-developers-list.1001551.n3.nabble.com/Source-API-requires-unbounded-distributed-storage-td18551.html for more information.
      This JIRA covers augmenting the Source API with an additional callback that allows the Structured Streaming scheduler to notify the source when it is safe to discard buffered data.
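
      To make the proposal concrete, here is a minimal, self-contained sketch of what such a callback could look like. The `Offset` case class, the `Seq[String]` return type (standing in for `DataFrame`), and the `BufferedSource` class are simplifications invented for illustration, not Spark's actual types; the shape of the `commit(end)` callback follows the idea described above, where the scheduler promises never again to request data at or before `end`:

      ```scala
      // Simplified stand-in for Spark's streaming Offset type (assumption).
      case class Offset(value: Long)

      // Sketch of a Source API augmented with a commit() callback.
      trait Source {
        def getOffset: Option[Offset]
        // Must return the same data for a given (start, end] until commit() is called.
        def getBatch(start: Option[Offset], end: Offset): Seq[String] // DataFrame in Spark
        // Proposed callback: all offsets at or before `end` will never be
        // requested again, so the source may discard them.
        def commit(end: Offset): Unit = {}
        def stop(): Unit
      }

      // Toy in-memory source that trims its buffer when offsets are committed,
      // keeping its state bounded by the uncommitted window.
      class BufferedSource extends Source {
        private var buffer = Vector.empty[(Long, String)]
        private var nextOffset = 0L

        def add(record: String): Unit = {
          buffer :+= (nextOffset, record)
          nextOffset += 1
        }

        def getOffset: Option[Offset] =
          if (nextOffset == 0) None else Some(Offset(nextOffset - 1))

        // Records with start.value < offset <= end.value; all records when start is None.
        def getBatch(start: Option[Offset], end: Offset): Seq[String] = {
          val lo = start.map(_.value).getOrElse(-1L)
          buffer.collect { case (off, rec) if off > lo && off <= end.value => rec }
        }

        // Discard everything at or before the committed offset.
        override def commit(end: Offset): Unit =
          buffer = buffer.filter { case (off, _) => off > end.value }

        def stop(): Unit = ()

        def bufferedCount: Int = buffer.size
      }
      ```

      Without `commit()`, `BufferedSource` could never shrink `buffer`, since any past `(start, end]` range might be replayed after a failure; with it, the retained state is bounded by the data the scheduler has not yet committed.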





              Assignee: Frederick Reiss
              Reporter: Frederick Reiss
              Shepherd: Michael Armbrust