Details
Type: Improvement
Status: Open
Priority: P3
Resolution: Unresolved
Description
When you construct a collection first and then convert it to an iterator, Spark must evaluate the entire input partition before the first output element becomes available. This defeats some of the spilling to disk that Spark can otherwise do. Instead, chain operations on Iterators directly.
This is only possible in the Java API for Spark 2 and above (and that limitation is my fault, from my earlier work on the Spark project).
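A minimal plain-Java sketch of the two patterns, as they might appear inside a `mapPartitions` function (the method names `eager` and `lazy`, and the doubling transform, are illustrative, not from the Spark codebase):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class IteratorChaining {
    // Anti-pattern: materializes the whole partition into a collection,
    // so no output element is available until every input is consumed.
    static Iterator<Integer> eager(Iterator<Integer> input) {
        List<Integer> out = new ArrayList<>();
        while (input.hasNext()) {
            out.add(input.next() * 2);
        }
        return out.iterator();
    }

    // Preferred: wrap the input iterator so each element is transformed
    // lazily as the downstream consumer pulls it.
    static Iterator<Integer> lazy(Iterator<Integer> input) {
        return new Iterator<Integer>() {
            @Override
            public boolean hasNext() {
                return input.hasNext();
            }

            @Override
            public Integer next() {
                return input.next() * 2;
            }
        };
    }

    public static void main(String[] args) {
        List<Integer> partition = List.of(1, 2, 3);
        Iterator<Integer> it = lazy(partition.iterator());
        StringBuilder sb = new StringBuilder();
        while (it.hasNext()) {
            sb.append(it.next()).append(" ");
        }
        System.out.println(sb.toString().trim());
    }
}
```

Both methods produce the same elements, but only the lazy version lets Spark start consuming (and, where applicable, spilling) before the whole partition has been read.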