Details
Type: New Feature
Status: Open
Priority: Major
Resolution: Unresolved
Affects Version: 4.0.0
Fix Version: None
Description
In feature engineering we need to process the input data to create features and feature vectors, which are required to train the model. This involves multiple Spark transformations (e.g., map, filter). Spark optimizes chains of transformations very well thanks to its lazy execution: it combines multiple transformations into fewer stages, which reduces the overall execution time.
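As an analogy for this fusion behaviour (using plain Scala lazy views rather than Spark itself, so the snippet is self-contained), chained map/filter steps are not evaluated eagerly; they collapse into a single pass when a result is finally forced:

```scala
// Analogy only: Scala's lazy views fuse map/filter into one pass,
// similar to how Spark fuses narrow transformations into one stage.
val data = (1 to 10).view                              // lazy, like an RDD: nothing runs yet
val pipeline = data.map(_ * 2).filter(_ > 5).map(_ + 1) // still nothing has executed
// Only forcing the result (like a Spark action) triggers a single fused traversal:
val result = pipeline.toList
```

In Spark the same principle applies: transformations build up a DAG, and execution is deferred until an action such as `collect` or `count`.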
I found that execution time can still be improved in the case of multiple filters over the same RDD, for example:
val rddfilter0 = personRdd.filter(t => t.age > 5  && t.age <= 12)
val rddfilter1 = personRdd.filter(t => t.age > 12 && t.age <= 18)
val rddfilter2 = personRdd.filter(t => t.age > 18 && t.age <= 25)
val rddfilter3 = personRdd.filter(t => t.age > 25 && t.age <= 35)
val rddfilter4 = personRdd.filter(t => t.age > 35 && t.age <= 65)
Sample Run Results:
Records: 50,000,000
5 filters execution time: 24854 ms
5 filters with map execution time: 5212 ms
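A minimal sketch of the "filters with map" idea, using a plain Scala collection in place of an RDD so it runs standalone; the `Person` case class, `bucketOf` helper, and age boundaries are assumptions mirroring the five filters above. Instead of five separate full scans, one pass tags each record with its bucket index:

```scala
// Hypothetical sketch: replace N filters (N scans) with one map (one scan).
case class Person(name: String, age: Int)

// Boundaries of the five age buckets from the example:
// (5,12], (12,18], (18,25], (25,35], (35,65]
val bounds = Vector(5, 12, 18, 25, 35, 65)

// Returns the bucket index for an age, or -1 if it falls outside all buckets.
def bucketOf(age: Int): Int =
  bounds.sliding(2).indexWhere { case Seq(lo, hi) => age > lo && age <= hi }

val people = Seq(Person("a", 10), Person("b", 20), Person("c", 40))

// One traversal groups every record into its bucket.
val byBucket: Map[Int, Seq[Person]] = people.groupBy(p => bucketOf(p.age))
```

On an actual RDD the equivalent single pass would be something like `personRdd.map(t => (bucketOf(t.age), t))`, after which each bucket's records can be selected by key without rescanning the full dataset.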
With this approach we can improve execution time several-fold and significantly reduce the memory footprint for a complex DAG of Spark transformations.
A sample illustration can be found here:
https://docs.google.com/document/d/1gdWR2TwbCfiuRF51EHA1zRnD9ES_neIvIsgEvizrjuo/edit?usp=sharing
We need support for such a transformation in Spark Core so that more complex transformations can be supported. Some illustrations are provided in the document above.