SPARK-22584: dataframe write partitionBy out of disk/java heap issues


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Problem
    • Affects Version/s: 2.2.0
    • Fix Version/s: None
    • Component/s: SQL
    • Labels: None

    Description

      I have been seeing issues with partitionBy in the DataFrame writer. For testing I am using a 6 MB file with around 1487 rows and 21 columns. There is nothing out of the ordinary about the columns; each is either a DoubleType or a StringType. The partitionBy call uses two partition columns with verified low cardinality: one has 30 unique values and the other has 2.

      ```scala
      import org.apache.spark.sql.SaveMode

      // Write the DataFrame partitioned by the two low-cardinality columns.
      df.write
        .partitionBy("first", "second")
        .mode(SaveMode.Overwrite)
        .parquet(s"$location$example/$corrId/")
      ```
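
      For context, a rough sketch (my own reconstruction, not taken from the original report) of a DataFrame with the shape described above: roughly 1487 rows, one partition column with about 30 distinct values, one with 2, and a double column standing in for the remaining DoubleType columns. The app name and the `value` payload are placeholders; the column names `first` and `second` come from the snippet above.

      ```scala
      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.functions._

      val spark = SparkSession.builder().appName("partitionBy-shape").getOrCreate()

      // Synthetic stand-in for the 6 MB test file: ~30 unique values in "first",
      // 2 unique values in "second", plus a double column as payload.
      val df = spark.range(1487).select(
        (col("id") % 30).cast("string").as("first"),
        (col("id") % 2).cast("string").as("second"),
        (rand() * 100).as("value")
      )
      ```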

      When running this example on Amazon EMR with 5 r4.xlarge instances (30 GB of memory each), I get a Java heap out-of-memory error. I have maximizeResourceAllocation set, and I verified it on the instances. I have even set it to false and explicitly set the driver and executor memory to 16g, but I still hit the same issue. Occasionally I get an error about disk space instead, and the job seems to work if I use an r3.xlarge (which has SSD-backed instance storage). It seems strange that 6 MB of data needs to spill to disk at all.
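
      As a reference point, a minimal sketch of how those explicit 16g settings might be supplied (the app name is a placeholder, and this is not taken from the original job). Note that spark.driver.memory generally has to be set before the driver JVM starts, e.g. via spark-submit or EMR's spark-defaults configuration classification, so configuring it on an already-running session has no effect.

      ```scala
      import org.apache.spark.sql.SparkSession

      // Explicit driver/executor memory instead of relying on
      // maximizeResourceAllocation; values mirror the 16g mentioned above.
      val spark = SparkSession.builder()
        .appName("partitionBy-memory-test")
        .config("spark.driver.memory", "16g")
        .config("spark.executor.memory", "16g")
        .getOrCreate()
      ```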

      The problem mainly seems to be centered on using two or more partition columns rather than one. If I partition by either column alone, I have no problems. It is also worth noting that the data is evenly distributed across the partition values.
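
      A commonly suggested workaround for writes with multiple partition columns (my own note, not part of the original report, and whether it resolves this particular OOM is not established) is to repartition by the partition columns before writing, so that all rows for a given (first, second) combination land in the same task:

      ```scala
      import org.apache.spark.sql.SaveMode
      import org.apache.spark.sql.functions.col

      // Cluster rows by the partition columns first; this yields one output file
      // per (first, second) combination instead of up to one per combination per
      // task, which cuts down the number of Parquet writers the job has to manage.
      df.repartition(col("first"), col("second"))
        .write
        .partitionBy("first", "second")
        .mode(SaveMode.Overwrite)
        .parquet(s"$location$example/$corrId/")
      ```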


    People

      Assignee: Unassigned
      Reporter: Derek M Miller (dmmiller612)
      Votes: 0
      Watchers: 1
