Spark Streaming has trouble dealing with situations where
batch processing time > batch interval
i.e., input data arrives faster than Spark can drain it from the receiver's queue.
If that throughput is sustained for long enough, the system becomes unstable: unprocessed blocks accumulate until the memory of the Receiver's Executor overflows.
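The runaway behavior can be sketched with simple arithmetic (illustrative only, not Spark code): whenever data arrives faster than a batch can be processed, the backlog grows by the surplus every interval, without bound.

```python
def backlog_after(n_batches, arrival_rate, processing_rate, batch_interval_s):
    """Queued elements after n batches when arrivals outpace processing.
    Purely illustrative arithmetic; the names are hypothetical."""
    surplus_per_batch = (arrival_rate - processing_rate) * batch_interval_s
    return max(0.0, n_batches * surplus_per_batch)

# 1500 elements/s arriving, 1000 elements/s processed, 1 s batches:
# after 60 batches the receiver is holding 30000 undelivered elements.
print(backlog_after(60, 1500, 1000, 1.0))
```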
This issue aims at transmitting a back-pressure signal from the processing side back to data ingestion, so that ingestion can be throttled to match the processing rate, in a backwards-compatible way.
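One way such a signal can translate into a rate bound is proportional control: scale the next batch's ingestion bound by how far the last batch overran (or underran) its interval. The sketch below is a minimal illustration under that assumption; the function name and signature are hypothetical and this is not Spark's actual implementation.

```python
def next_rate_bound(current_rate, batch_interval_s, processing_time_s, min_rate=1.0):
    """Return a new per-second ingestion bound so the next batch can
    finish within the batch interval. Hypothetical helper, not Spark API."""
    if processing_time_s <= 0:
        return current_rate
    # Elements actually processed per second during the last batch.
    processed_per_second = current_rate * batch_interval_s / processing_time_s
    return max(min_rate, processed_per_second)

# A batch admitted at 1000 elements/s took 2 s instead of the 1 s interval,
# so the bound drops to 500 elements/s for the next batch.
print(next_rate_bound(1000.0, 1.0, 2.0))
```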
The original design doc can be found here:
The second design doc, which focuses on the first sub-task (omitting the background material and centering on the implementation), can be found here:
|Sub-task|Status|Assignee|
|---|---|---|
|Provide pluggable Congestion Strategies to deal with Streaming load|In Progress|Unassigned|