Spark / SPARK-21595

Introduction of spark.sql.windowExec.buffer.spill.threshold in Spark 2.2 breaks existing workflow


Details

    Description

      My PySpark code has the following statement:

      # imports needed for this snippet
      from pyspark.sql import functions as sqlf
      from pyspark.sql.window import Window

      # assign row key for tracking
      df = df.withColumn(
              'association_idx',
              sqlf.row_number().over(
                  Window.orderBy('uid1', 'uid2')
              )
          )
      

      where df is a long, skinny DataFrame (450M rows, 10 columns). Because the Window has no partitionBy, this creates one large window over which the whole DataFrame is sorted.
      In Spark 2.1 this works without problems; in Spark 2.2 it fails with either an out-of-memory exception or a "too many open files" exception, depending on the memory settings (adjusting those is what I tried first to fix this).
      Monitoring the blockmgr directory, I see that Spark 2.1 creates 152 files, while Spark 2.2 creates more than 110,000 files.
      In the log I see the following messages (110,000 of these):

      17/08/01 08:55:37 INFO UnsafeExternalSorter: Spilling data because number of spilledRecords crossed the threshold 4096
      17/08/01 08:55:37 INFO UnsafeExternalSorter: Thread 156 spilling sort data of 64.1 MB to disk (0  time so far)
      17/08/01 08:55:37 INFO UnsafeExternalSorter: Spilling data because number of spilledRecords crossed the threshold 4096
      17/08/01 08:55:37 INFO UnsafeExternalSorter: Thread 156 spilling sort data of 64.1 MB to disk (1  time so far)
      

      So I started hunting for clues in UnsafeExternalSorter, without luck. What I had missed was this one message:

      17/08/01 08:55:37 INFO ExternalAppendOnlyUnsafeRowArray: Reached spill threshold of 4096 rows, switching to org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter
      

      That message allowed me to track down the issue.
      By changing the configuration to include:

      spark.sql.windowExec.buffer.spill.threshold	2097152
      

      I got it to work again, with the same performance as Spark 2.1.
      I also have workflows using window functions that do not fail, but that took a performance hit due to the excessive spilling caused by the default of 4096.
      To make these issues easier to track down, I think this config variable should be included in the configuration documentation.
      Maybe 4096 is too small as a default value?
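      For reference, here is a minimal sketch of how I set this option when building the session (PySpark; the app name and input path below are placeholders, and 2097152 is simply the value that worked for this workflow, not a general recommendation):

      from pyspark.sql import SparkSession
      from pyspark.sql import functions as sqlf
      from pyspark.sql.window import Window

      # Raise the window-exec spill threshold before running the job.
      # 2097152 rows matched this workflow; tune it to your data volume.
      spark = (
          SparkSession.builder
          .appName("association-indexing")  # placeholder app name
          .config("spark.sql.windowExec.buffer.spill.threshold", "2097152")
          .getOrCreate()
      )

      df = spark.read.parquet("/path/to/input")  # placeholder input path
      df = df.withColumn(
          'association_idx',
          sqlf.row_number().over(Window.orderBy('uid1', 'uid2'))
      )

      The same setting can also be passed at submit time, e.g. spark-submit --conf spark.sql.windowExec.buffer.spill.threshold=2097152.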


          People

            Assignee: Tejas Patil
            Reporter: Stephan Reiling