SPARK-18857

SparkSQL ThriftServer hangs while extracting huge data volumes in incremental collect mode


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.2
    • Fix Version/s: 2.0.3, 2.1.1, 2.2.0
    • Component/s: SQL
    • Labels: None

    Description

      We are trying to run a SQL query on our Spark cluster and extract around 200 million records through the SparkSQL Thrift Server interface. The query works fine on Spark 1.6.3, but on Spark 2.0.2 the Thrift server hangs after fetching data from a few partitions (we are using incremental collect mode with 400 partitions). Per the documentation, the maximum memory taken up by the Thrift server should be what the largest data partition requires. Instead we observed that the Thrift server does not release the memory of already-fetched partitions when GC occurs, even after it has moved on to fetching the next partition. This is not the case with version 1.6.3.
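
      For context, incremental collect mode is toggled by the spark.sql.thriftServer.incrementalCollect setting; simplified from SparkExecuteStatementOperation.scala in the 2.x line (so treat the exact shape as an approximation), it swaps a full collect() for a partition-at-a-time toLocalIterator:

      import scala.collection.JavaConverters._
      import org.apache.spark.sql.{DataFrame, Row}

      // With spark.sql.thriftServer.incrementalCollect=true, rows are streamed
      // to the driver one partition at a time, so resident memory should stay
      // bounded by the largest partition rather than by the whole result set.
      def resultIterator(result: DataFrame, incremental: Boolean): Iterator[Row] =
        if (incremental) result.toLocalIterator.asScala // one partition at a time
        else result.collect().iterator                  // whole result at once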

      On further investigation we found that SparkExecuteStatementOperation.scala was modified for SPARK-16563 ("[SQL] Fix spark sql thrift server FetchResults bug"), where the result set iterator was duplicated to keep a reference to the first set:

      + val (itra, itrb) = iter.duplicate
      + iterHeader = itra
      + iter = itrb
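
      For reference, scala.collection.Iterator.duplicate backs the two returned iterators with a shared queue: every element consumed through one iterator stays buffered until the other consumes it as well. A header iterator that is retained but never advanced therefore keeps a strong reference to every row fetched so far. Roughly:

      // Illustrative only: like iterHeader above, `header` is kept but never
      // advanced, so duplicate's internal queue accumulates every consumed row.
      val (header, data) = Iterator.fill(1000)(new Array[Byte](1024)).duplicate
      data.foreach(_ => ()) // all 1000 arrays are still reachable via `header`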

      We suspect that this duplication is what keeps the memory from being reclaimed on GC. To confirm it, we created an iterator in our test class and fetched the data twice: once without duplicating the iterator, and once after creating a duplicate. In the first case the run completed and fetched the entire data set; in the second, the driver hung after fetching data from a few partitions.
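
      A sketch of that experiment (the row type and sizes here are hypothetical, not our actual test class): run it with a deliberately small heap, e.g. -Xmx1g, and only the duplicated variant stalls.

      object DuplicateMemoryTest {
        // ~2 GB of synthetic 1 KB rows, produced lazily
        private def rows: Iterator[Array[Byte]] =
          Iterator.fill(2000000)(new Array[Byte](1024))

        def main(args: Array[String]): Unit = {
          rows.foreach(_ => ()) // case 1: completes; each row is GC-able once consumed

          val (head, it) = rows.duplicate
          it.foreach(_ => ())   // case 2: `head` pins every consumed row, so the JVM
                                // grinds in GC and eventually OOMs, matching the hang
        }
      }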

      Attachments

        1. GC-spark-1.6.3 (9 kB, vishal agrawal)
        2. GC-spark-2.0.2 (7 kB, vishal agrawal)

            People

              Assignee:
              Dongjoon Hyun
              Reporter:
              vishal agrawal
              Votes:
              1
              Watchers:
              4
