I think there might be two unintended side effects of this change. This code used to work in pyspark:
sc.parallelize([5,3,4,2,1]).map(lambda x: (x,x)).sortByKey().take(1)
Now it fails with the error:
File "<...>/spark/python/pyspark/rdd.py", line 1023, in takeUpToNumLeft
TypeError: list object is not an iterator
Changing mapFunc and sort back to generators rather than regular functions fixes that problem.
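As a rough illustration (plain Python, not the actual Spark source; the helper names here are made up), code in the style of takeUpToNumLeft calls next() on whatever the mapped function hands back, which works for a generator but raises the TypeError above for a plain list:

def map_as_generator(iterator):
    for x in iterator:
        yield (x, x)          # generator: next() works

def map_as_list(iterator):
    return [(x, x) for x in iterator]   # regular function: returns a list

def take_up_to(result, n):
    # mimics iterating with next(), as the takeUpToNumLeft frame in the traceback does
    taken = []
    while len(taken) < n:
        taken.append(next(result))      # TypeError if result is a list, fine if it is a generator
    return taken

print(take_up_to(map_as_generator(iter([5, 3, 4, 2, 1])), 1))   # [(5, 5)]
# take_up_to(map_as_list(iter([5, 3])), 1)  -> TypeError: 'list' object is not an iterator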
After making that change, there is a second side effect from the removal of flatMap: with the default partitioning scheme, the above code returns the following unexpected result:
[[(1, 1), (2, 2)]]
Removing sortByKey, e.g.:
sc.parallelize([5,3,4,2,1]).map(lambda x: (x,x)).take(1)
returns the expected result [(5, 5)]. Restoring the call to flatMap resolves this as well.
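To sketch why (plain Python, no Spark, and the partition layout below is only an assumption for illustration): each partition's sorted contents come back as a list, and without a flatMap step, take(1) returns that whole first-partition list instead of a single key/value pair:

partitions = [[(1, 1), (2, 2)], [(3, 3), (4, 4), (5, 5)]]   # assumed partition layout after sortByKey

print(partitions[:1])    # [[(1, 1), (2, 2)]]  -- first "element" is an entire partition

flattened = [pair for part in partitions for pair in part]  # flatMap-style flattening
print(flattened[:1])     # [(1, 1)]            -- first element is a single pair, as expected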