Currently SparkClient shuffles data by calling partitionByKey(). This transformation outputs <key, value> tuples. However, Hive's ExecMapper expects <key, iterator<value>> tuples, and Spark's groupByKey() seems to output this shape directly. Thus, by using groupByKey, we may be able to avoid Hive's own key clustering mechanism (in HiveReduceFunction). This task is to investigate that approach.
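A minimal sketch of the idea in Spark's Java API, assuming a standalone driver with hypothetical placeholder data (the class name, local master, and sample pairs are illustrative, not the actual SparkClient code): groupByKey() on a JavaPairRDD shuffles and clusters values per key, producing <key, Iterable<value>> pairs without a separate key-clustering pass.

{code:java}
import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class GroupByKeyShuffleSketch {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("groupByKey-sketch").setMaster("local[2]");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // Hypothetical stand-in for the <key, value> pairs emitted by the map side.
    JavaPairRDD<String, Integer> mapOutput = sc.parallelizePairs(Arrays.asList(
        new Tuple2<>("k1", 1),
        new Tuple2<>("k1", 2),
        new Tuple2<>("k2", 3)));

    // groupByKey shuffles the data and clusters all values for a key together,
    // yielding <key, Iterable<value>> pairs -- the shape the reduce side expects,
    // so no additional key clustering would be needed downstream.
    JavaPairRDD<String, Iterable<Integer>> grouped = mapOutput.groupByKey();

    List<Tuple2<String, Iterable<Integer>>> result = grouped.collect();
    for (Tuple2<String, Iterable<Integer>> t : result) {
      System.out.println(t._1() + " -> " + t._2());
    }

    sc.stop();
  }
}
{code}

Whether this actually removes the clustering logic in HiveReduceFunction (and at what performance cost, since groupByKey materializes all values per key) is what this investigation would need to determine.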
|Assignee||Chao [ csun ]|
|Attachment||HIVE-7546.5-spark.patch [ 12658864 ]|
|Status||Open [ 1 ]||Patch Available [ 10002 ]|
|Status||Patch Available [ 10002 ]||Resolved [ 5 ]|
|Fix Version/s||spark-branch [ 12327352 ]|
|Resolution||Fixed [ 1 ]|