Details
- Type: Bug
- Status: Closed
- Priority: Critical
- Resolution: Fixed
- Fix Version: 0.9.4
- Component: None
- Labels: None
Description
The following patch makes the changes needed to support Spark 1.1.0. The Spark evaluator now uses RDD.mapPartitionsWithIndex to extract RDD elements lazily (as needed) instead of the internal Spark method RDD.iterator, which requires access to a TaskContext. It also changes the JavaSparkContext construction to read the Spark configuration parameters.
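A minimal sketch of the approach described above, assuming Spark 1.1.0's public API (the object and method names below are illustrative, not taken from the actual patch). Filtering on the partition index inside mapPartitionsWithIndex yields one partition's elements without touching the internal RDD.iterator, and the JavaSparkContext is built from a SparkConf so it picks up configuration parameters:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.api.java.JavaSparkContext

object LazyPartitionSketch {
  def main(args: Array[String]): Unit = {
    // Build the context from a SparkConf so Spark config parameters are honored.
    val conf = new SparkConf().setAppName("spark-evaluator").setMaster("local[*]")
    val jsc = new JavaSparkContext(conf)

    val rdd = jsc.sc.parallelize(1 to 10, numSlices = 4)

    // Lazily extract one partition's elements via the public API:
    // partitions other than `wanted` emit an empty iterator, so no
    // TaskContext-dependent internal call (RDD.iterator) is needed.
    def partitionElements(wanted: Int): Array[Int] =
      rdd.mapPartitionsWithIndex(
        (idx, it) => if (idx == wanted) it else Iterator.empty,
        preservesPartitioning = true
      ).collect()

    println(partitionElements(1).mkString(","))
    jsc.stop()
  }
}
```

Because the closure passed to mapPartitionsWithIndex only consumes the iterator of the requested partition, elements are materialized on demand rather than eagerly for the whole RDD.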