Spark / SPARK-5863

Improve performance of convertToScala codepath.


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 1.2.0, 1.2.1
    • Fix Version/s: None
    • Component/s: SQL
    • Labels: None

    Description

      Was doing some perf testing on reading Parquet files and noticed that moving from Spark 1.1 to 1.2 made read performance 3x worse. In the profiler, the culprit showed up as ScalaReflection.convertRowToScala.

      In particular, this zip is the issue:

      r.toSeq.zip(schema.fields.map(_.dataType))
      

      I see there's already a comment on that code noting that it is slow, but it hasn't been fixed. In my test case this alone produces a 3x degradation in Parquet read performance.

      Edit: the map is part of the issue as well. This whole code block sits in a tight loop and allocates a new ListBuffer, which then has to grow, for every transformation. A possible fix is to switch to seq.view, which allocates iterators instead of intermediate collections.
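      A minimal sketch of the allocation pattern described above, using simplified stand-ins for Spark's Row and DataType (the names ConvertSketch, Field, and convertValue are hypothetical, not Spark code). The eager version mirrors the shape of the reported hot path, where zip and map each build a fresh intermediate collection per row; the view-based version composes the same steps lazily through iterators, as the edit suggests:

```scala
object ConvertSketch {
  // Simplified stand-ins for Spark's DataType hierarchy and StructField.
  sealed trait DataType
  case object IntType extends DataType
  case object StringType extends DataType
  final case class Field(name: String, dataType: DataType)

  // Placeholder per-value conversion, standing in for the real
  // Catalyst-to-Scala conversion logic.
  private def convertValue(v: Any, dt: DataType): Any = dt match {
    case IntType    => v.asInstanceOf[Int]
    case StringType => v.toString
  }

  // Eager version: zip and map each materialize a new intermediate
  // collection on every row, which is costly inside a tight loop.
  def convertEager(row: Seq[Any], fields: Seq[Field]): Seq[Any] =
    row.zip(fields.map(_.dataType)).map { case (v, dt) => convertValue(v, dt) }

  // View-based version: the zip and map are composed lazily via iterators;
  // only the final toSeq materializes a collection.
  def convertView(row: Seq[Any], fields: Seq[Field]): Seq[Any] =
    row.view
      .zip(fields.view.map(_.dataType))
      .map { case (v, dt) => convertValue(v, dt) }
      .toSeq
}
```

Both produce the same result; the difference is purely in how many intermediate collections get allocated per row.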

      Attachments

        Issue Links

          Activity

            People

              Assignee: Unassigned
              Reporter: copris (Cristian Opris)
              Votes: 0
              Watchers: 4

              Dates

                Created:
                Updated:
                Resolved: