The current implementation splits the operator plan at the lowest common ancestor by inserting a FileSinkOperator and a list of TableScanOperators. Writing to a file (via the FileSinkOperator) is expensive. We should be able to insert a ReduceSinkOperator instead. The result RDD from the first job can then be cached and referenced in subsequent Spark jobs.
This is a follow-up for