A common pattern in Spark development is to look for opportunities to exploit data locality through mechanisms such as mapPartitions. Often this happens when an existing set of RDD transformations is refactored to improve performance. At that point, significant code rework may be required because the input is an Iterator[T] rather than an RDD, and Iterator lacks most of the RDD API. The most common examples we've encountered so far involve the *ByKey methods, sample, and takeSample. We have also seen cases where, due to changes in the structure of the data, mapPartitions is no longer usable and the code has to be converted back to the RDD API.
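To make the gap concrete, here is a minimal sketch (the word-count data and the helper name are ours, not from any library): an RDD pipeline can call reduceByKey directly, but once the same logic moves inside mapPartitions, only Iterator methods are available and the grouping must be re-implemented by hand.

```scala
// RDD version: the aggregation is a one-liner.
//   rdd.map(w => (w, 1)).reduceByKey(_ + _)

// Inside mapPartitions the input is Iterator[(String, Int)], so an
// equivalent local aggregation has to be hand-rolled, e.g.:
def reduceByKeyLocal(it: Iterator[(String, Int)]): Iterator[(String, Int)] =
  it.foldLeft(Map.empty[String, Int]) { case (acc, (k, v)) =>
    acc.updated(k, acc.getOrElse(k, 0) + v)
  }.iterator
```

The logic is the same in both cases; only the API surface differs, which is exactly the refactoring cost described above.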
If data manipulation through the RDD API could be applied to the standard Scala data structures, then refactoring Spark data pipelines would become faster and less bug-prone. Also, and this is no small benefit, the thoughtfulness and experience of the Spark community could spread to the broader Scala community.
There are multiple ways to approach this problem, including creating a set of Local*RDD classes and/or adding implicit conversions.
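As an illustration of the implicit-conversion approach, here is a minimal sketch (LocalRDDSyntax and LocalPairOps are hypothetical names; only the reduceByKey signature mirrors Spark's):

```scala
import scala.collection.mutable

object LocalRDDSyntax {
  // Enrich any Iterator[(K, V)] with an RDD-like reduceByKey so that
  // code written against the RDD API can run on local data unchanged.
  implicit class LocalPairOps[K, V](val it: Iterator[(K, V)]) extends AnyVal {
    def reduceByKey(f: (V, V) => V): Iterator[(K, V)] = {
      val acc = mutable.Map.empty[K, V]
      it.foreach { case (k, v) =>
        acc(k) = acc.get(k).map(f(_, v)).getOrElse(v)
      }
      acc.iterator
    }
  }
}

// Usage: with the implicit in scope, the same expression works on a
// plain Iterator as on a pair RDD.
//   import LocalRDDSyntax._
//   Iterator(("a", 1), ("b", 2), ("a", 3)).reduceByKey(_ + _)
```

The appeal of this route is that existing RDD-style call sites need no changes; the Local*RDD-classes route trades that transparency for a more explicit, inspectable type.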
Here is a simple example, meant to be short rather than complete or performance-optimized: