Affects Version/s: 2.4.0
Fix Version/s: None
Component/s: Spark Core
SubtractedRDD, which is used to implement RDD.subtract() and PairRDDFunctions.subtractByKey(), currently buffers one partition in memory and does not support spilling: https://github.com/apache/spark/blob/v2.4.3/core/src/main/scala/org/apache/spark/rdd/SubtractedRDD.scala#L42
In principle, we could implement subtractByKey as a left-outer join followed by a filter (i.e. an anti-join), but the Scaladoc explains why this approach wasn't taken:
> For example, if we have left.subtractByKey(right) and right has hundreds of occurrences of a key, then we'd end up buffering hundreds of tuples.
Instead, maybe we could implement a sort-merge join where we build an ExternalAppendOnlyMap of unique right keys, use an ExternalSorter to sort the left input, then iterate over both sorted iterators and perform a merge.
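To make that merge concrete, here is a minimal sketch of just the merge step over two already-sorted plain iterators. sortedSubtractByKey is a hypothetical helper name; in a real implementation both inputs would be fed by the spilling structures above (ExternalSorter for the left input, ExternalAppendOnlyMap for the distinct right keys) rather than in-memory iterators:

```scala
// Sketch of the merge step only, assuming both inputs are already sorted by key.
def sortedSubtractByKey[K, V](
    sortedLeft: Iterator[(K, V)],   // left pairs, sorted by key
    sortedRightKeys: Iterator[K]    // distinct right keys, sorted
  )(implicit ord: Ordering[K]): Iterator[(K, V)] = new Iterator[(K, V)] {
  private val left = sortedLeft.buffered
  private val right = sortedRightKeys.buffered

  // Position `left` on the next pair whose key does not appear on the right.
  private def advance(): Unit = {
    while (left.hasNext && {
      // Skip right keys smaller than the current left key...
      while (right.hasNext && ord.lt(right.head, left.head._1)) right.next()
      // ...then check whether the current left key is matched on the right.
      right.hasNext && ord.equiv(right.head, left.head._1)
    }) {
      left.next() // matched on the right: drop this left pair
    }
  }

  override def hasNext: Boolean = { advance(); left.hasNext }
  override def next(): (K, V) = { advance(); left.next() }
}

// Example: pairs with key "b" are subtracted, left duplicates are preserved.
// sortedSubtractByKey(Iterator("a" -> 1, "b" -> 2, "b" -> 3, "c" -> 4),
//                     Iterator("b")).toList == List("a" -> 1, "c" -> 4)
```

Since both sides are consumed strictly in key order, only the current head of each iterator needs to be in memory at once, which is what makes the approach compatible with spilling.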
Note that this problem only impacts the RDD API.
Here are some existing workarounds for this OOM-proneness:
- Use more partitions: e.g. left.subtractByKey(right, 2000) (or pass in a custom partitioner). This may not help if you have heavily skewed keys, though.
- Use a left join followed by a filter; a sketch is given after this list.
If you wanted to further optimize, you could replace the right-side values with dummy placeholders to avoid having to shuffle them; see the second sketch below.
- Use DataFrames / Datasets instead of RDDs; Spark SQL's join implementations support spilling.
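For the join-based workaround, here is a minimal sketch using only the public RDD API (subtractByKeyViaJoin is a hypothetical helper name; left, right, and numPartitions are assumed inputs):

```scala
import scala.reflect.ClassTag

import org.apache.spark.rdd.RDD

// Hypothetical helper: subtractByKey expressed as a left outer join plus a
// filter that keeps only the left pairs with no match on the right.
def subtractByKeyViaJoin[K: ClassTag, V: ClassTag, W](
    left: RDD[(K, V)],
    right: RDD[(K, W)],
    numPartitions: Int): RDD[(K, V)] = {
  left.leftOuterJoin(right, numPartitions)
    .filter { case (_, (_, rightValue)) => rightValue.isEmpty }
    .map { case (k, (v, _)) => (k, v) }
}
```

Note that this still buffers every right-side value for a matching key (the Scaladoc caveat quoted above), so it works best when the right side has few duplicate keys.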
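And a sketch of the further-optimized variant: the right-side values are replaced with a unit placeholder before the join, so they never get shuffled. The added distinct() on the keys goes slightly beyond the placeholder idea itself, but it also caps per-key buffering at one entry; again, the helper name and inputs are hypothetical:

```scala
import scala.reflect.ClassTag

import org.apache.spark.rdd.RDD

// Hypothetical helper: same join-plus-filter shape as above, but the
// right-side values are dropped (and keys deduplicated) before shuffling.
def subtractByKeyViaJoinLite[K: ClassTag, V: ClassTag, W](
    left: RDD[(K, V)],
    right: RDD[(K, W)],
    numPartitions: Int): RDD[(K, V)] = {
  val rightKeys = right.map { case (k, _) => (k, ()) }.distinct(numPartitions)
  left.leftOuterJoin(rightKeys, numPartitions)
    .filter { case (_, (_, matched)) => matched.isEmpty }
    .map { case (k, (v, _)) => (k, v) }
}
```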