Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Incomplete
- Affects Version/s: 1.1.0
- Fix Version/s: None
Description
Spark has already integrated sort-based shuffle write, which greatly improves I/O performance and reduces memory consumption when the number of reducers is very large. On the reducer side, however, it still uses the hash-based shuffle reader implementation, which ignores the ordering of map output data in some situations.
Here we propose an MR-style, sort-merge-like shuffle reader for sort-based shuffle to further improve its performance.
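To illustrate the idea only (this is not the proposed patch, and all names below are hypothetical), a minimal Scala sketch of the reduce-side behavior a sort-merge reader implies: instead of rebuilding a hash map, the reducer performs a k-way merge over per-map outputs that sort-based shuffle write has already sorted by key.

```scala
// Minimal sketch, assuming each map task's output for this reducer arrives
// already sorted by key (as produced by sort-based shuffle write).
import scala.collection.mutable

object SortMergeReadSketch {
  // Merge several key-sorted iterators into one globally key-sorted iterator.
  def mergeSortedStreams[K, V](streams: Seq[Iterator[(K, V)]])
                              (implicit ord: Ordering[K]): Iterator[(K, V)] = {
    // Min-heap of streams, ordered by each stream's current head key.
    val heap = mutable.PriorityQueue.empty[BufferedIterator[(K, V)]](
      Ordering.by[BufferedIterator[(K, V)], K](_.head._1)(ord).reverse)
    streams.map(_.buffered).filter(_.hasNext).foreach(heap.enqueue(_))

    new Iterator[(K, V)] {
      def hasNext: Boolean = heap.nonEmpty
      def next(): (K, V) = {
        val stream = heap.dequeue()
        val kv = stream.next()
        if (stream.hasNext) heap.enqueue(stream) // re-insert with its new head key
        kv
      }
    }
  }

  def main(args: Array[String]): Unit = {
    // Two "map outputs", each sorted by key, merged into one sorted stream.
    val a = Iterator(("apple", 1), ("cherry", 2))
    val b = Iterator(("banana", 3), ("date", 4))
    mergeSortedStreams(Seq(a, b)).foreach(println)
    // (apple,1) (banana,3) (cherry,2) (date,4)
  }
}
```

Because keys arrive in sorted order from the merge, grouping or aggregation on the reduce side can be done with a streaming scan rather than an in-memory hash table.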
Work-in-progress code and a performance test report will be posted once some unit test bugs are fixed.
Any comments would be greatly appreciated.
Thanks a lot.
Attachments
Issue Links
- blocks
  - SPARK-2213 Sort Merge Join (Resolved)
  - SPARK-3056 Sort-based Aggregation (Resolved)
- is duplicated by
  - SPARK-2114 groupByKey and joins on raw data (Resolved)
- relates to
  - SPARK-2978 Provide an MR-style shuffle transformation (Resolved)
- links to