Spark / SPARK-3141

sortByKey() breaks take()


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 1.1.0
    • Fix Version/s: 1.1.0
    • Component/s: PySpark
    • Labels: None
    • Target Version/s:

      Description

      https://github.com/apache/spark/pull/1898/files#r16449470

      I think there might be two unintended side effects of this change. This code used to work in PySpark:

      sc.parallelize([5,3,4,2,1]).map(lambda x: (x,x)).sortByKey().take(1)
      Now it fails with the error:

      File "<...>/spark/python/pyspark/rdd.py", line 1023, in takeUpToNumLeft
      yield next(iterator)
      TypeError: 'list' object is not an iterator
      Changing mapFunc and sort back to generators rather than regular functions fixes that problem.
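
      For illustration, here is a minimal plain-Python sketch of that first failure; it is not the actual Spark source, and the helper names are made up. It mirrors the pattern in takeUpToNumLeft: next() requires an iterator, so it breaks as soon as the upstream sort function starts returning a plain list instead of a generator.

      def take_up_to_num_left(iterator, left=1):
          # Mirrors the failing pattern: next() needs an iterator, not a list.
          while left > 0:
              yield next(iterator)  # TypeError if `iterator` is really a list
              left -= 1

      def sort_as_list(iterator):
          # Regular function returning a list (the behavior after the change).
          return sorted(iterator)

      def sort_as_generator(iterator):
          # Generator version (the behavior before the change).
          for item in sorted(iterator):
              yield item

      data = [(5, 5), (3, 3), (4, 4), (2, 2), (1, 1)]
      # list(take_up_to_num_left(sort_as_list(iter(data))))             # raises the TypeError above
      print(list(take_up_to_num_left(sort_as_generator(iter(data)))))   # [(1, 1)]

      Switching the sort function back to a generator restores the iterator contract that takeUpToNumLeft relies on, which is the fix described above.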

      After making that change, there is a second side effect from the removal of flatMap: with the default partitioning scheme, the above code returns the following unexpected result:

      [[(1, 1), (2, 2)]]
      Removing sortByKey, e.g.:

      sc.parallelize([5,3,4,2,1]).map(lambda x: (x,x)).take(1)
      returns the expected result [(5, 5)]. Restoring the call to flatMap resolves this as well.
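
      To see where the nesting comes from, here is a hedged plain-Python sketch; the partition layout below is a hypothetical example, not necessarily how Spark partitions this data. Without a flatMap-style flattening step, each partition emits its entire sorted list as one record, so take(1) returns a list of pairs instead of the first pair.

      # Hypothetical partition layout, for illustration only.
      partitions = [[(2, 2), (1, 1)], [(5, 5), (3, 3), (4, 4)]]

      def sort_partition(part):
          # Without flatMap: the whole sorted list is a single output record.
          yield sorted(part)

      def sort_partition_flat(part):
          # With flatMap-style flattening: each key-value pair is its own record.
          for kv in sorted(part):
              yield kv

      without_flat = [rec for part in partitions for rec in sort_partition(part)]
      with_flat = [rec for part in partitions for rec in sort_partition_flat(part)]

      print(without_flat[:1])  # [[(1, 1), (2, 2)]]  -- the unexpected nested result
      print(with_flat[:1])     # [(1, 1)]            -- what take(1) should return after sortByKey

      Restoring the flatMap keeps individual key-value pairs as the RDD's records, which is why it resolves the second problem as well.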


    People

    • Assignee: Davies Liu (davies)
    • Reporter: Davies Liu (davies)
    • Votes: 0
    • Watchers: 3


    Time Tracking

    • Estimated: 2h
    • Remaining: 2h
    • Logged: Not Specified