Description
I spent a good chunk of the week working with the SparkPipeline implementation and came up with a few improvements to the flow that controls how the JavaSparkContext gets created when we aren't given one directly.
The main idea is to reuse the Pipeline's Configuration object to set any critical Spark-specific configuration options (e.g., memory allocations) on a SparkConf instance before constructing the JavaSparkContext.
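To make that concrete, here's a minimal sketch of the approach, not the actual SparkPipeline internals: the SparkContextFactory class and its create signature are hypothetical, but the pattern of forwarding "spark."-prefixed entries from a Hadoop Configuration into a SparkConf is the idea.

```java
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

/**
 * Illustrative sketch (hypothetical class/method names): build a
 * JavaSparkContext from the pipeline's Configuration when the caller
 * doesn't supply one directly.
 */
public final class SparkContextFactory {

  private SparkContextFactory() {}

  public static JavaSparkContext create(String master, String appName, Configuration conf) {
    SparkConf sparkConf = new SparkConf().setMaster(master).setAppName(appName);
    // Forward any Spark-specific options (e.g., spark.executor.memory)
    // that were set on the pipeline's Configuration.
    for (Map.Entry<String, String> entry : conf) {
      if (entry.getKey().startsWith("spark.")) {
        sparkConf.set(entry.getKey(), entry.getValue());
      }
    }
    return new JavaSparkContext(sparkConf);
  }
}
```

With something like this in place, a caller could tune Spark through the same Configuration it already hands to the Pipeline, e.g. conf.set("spark.executor.memory", "2g"), instead of having to construct and pass in a JavaSparkContext itself.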