Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Incomplete
- Affects Version/s: 1.3.0
- Fix Version/s: None
Description
The bypassMergeThreshold parameter (and the associated use of a hash-style shuffle when the number of partitions is below that threshold) is essentially a workaround for SparkSQL: the sort-based shuffle stores non-serialized objects, which is a deal-breaker for SparkSQL because it re-uses objects. Once the sort-based shuffle is changed to store serialized objects, we should never secretly fall back to a hash-style shuffle when the user has explicitly chosen the sort-based shuffle, given the hash-style shuffle's otherwise worse performance.
rxin, adav, masters of shuffle, it would be helpful to get your agreement on this proposal (and also a sanity check that I've characterized the issue correctly).
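The fallback described above can be sketched as follows. This is a simplified illustration, not Spark's actual implementation: the class and method names here are hypothetical, and the only detail taken as given is that the bypass applies when the partition count is at most spark.shuffle.sort.bypassMergeThreshold (default 200) and no map-side combine is needed.

```java
// Hedged sketch of the decision the sort-based shuffle path makes today.
// BypassDecision and shouldBypassMergeSort are illustrative names, not
// Spark's real API.
public class BypassDecision {
    // Mirrors the default of spark.shuffle.sort.bypassMergeThreshold.
    static final int BYPASS_MERGE_THRESHOLD = 200;

    // The sort-based shuffle silently takes a hash-style path (one output
    // file per reduce partition, later concatenated) when there is no
    // map-side combine and the partition count is small.
    static boolean shouldBypassMergeSort(int numPartitions, boolean mapSideCombine) {
        return !mapSideCombine && numPartitions <= BYPASS_MERGE_THRESHOLD;
    }
}
```

Under this proposal, once map outputs are stored in serialized form, the predicate above would simply go away for users who selected the sort-based shuffle.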
Issue Links
- is blocked by
  - SPARK-4550 In sort-based shuffle, store map outputs in serialized form (Resolved)
  - SPARK-7855 Move hash-style shuffle code out of ExternalSorter and into own file (Resolved)