
SPARK-13346: Using DataFrames iteratively leads to slow query planning


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Incomplete
    • Affects Version/s: 2.0.0
    • Fix Version/s: None
    • Component/s: SQL

    Description

      I have an iterative algorithm based on DataFrames, and the query plan grows very quickly with each iteration. Caching the current DataFrame at the end of an iteration does not fix the problem. However, converting the DataFrame to an RDD and back at the end of each iteration does fix the problem.
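      For illustration, that round trip can be sketched as follows against the Spark 2.0 SparkSession API (the helper name is only for this sketch, not from my actual code). Rebuilding the DataFrame from its RDD and schema gives a plan whose only leaf is a scan of that RDD, so later iterations no longer carry the accumulated lineage:

        import org.apache.spark.sql.{DataFrame, SparkSession}

        // Rebuild the DataFrame from its RDD and schema. The resulting logical plan
        // is a single scan over that RDD, so the old lineage is dropped.
        def truncateLineage(spark: SparkSession, df: DataFrame): DataFrame =
          spark.createDataFrame(df.rdd, df.schema)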

      Printing the query plans shows that the plan explodes quickly with successive iterations (roughly 10 lines, then several hundred, then several thousand, and so on).

      The desired behavior is for the analyzer to recognize that a large chunk of the query plan does not need to be recomputed since it is already cached, so the planning and computation work per iteration stays roughly constant.

      If useful, I can push (complex) code to reproduce the issue. But it should be simple to reproduce with any iterative algorithm that derives a new DataFrame from the previous one on each iteration, along the lines of the sketch below.
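      For example, a minimal sketch along these lines (illustrative only: the union transformation, column name, and local master are assumptions, not my actual algorithm) shows the analyzed plan roughly doubling per iteration even though each intermediate DataFrame is cached and materialized:

        import org.apache.spark.sql.SparkSession

        val spark = SparkSession.builder()
          .appName("PlanGrowthRepro")
          .master("local[*]")
          .getOrCreate()

        var df = spark.range(100).toDF("value")
        for (i <- 1 to 10) {
          // Derive a new DataFrame from the previous one; referencing it twice
          // makes the logical plan roughly double in size each iteration.
          df = df.union(df.selectExpr("value + 1 AS value")).cache()
          df.count()  // materializes the cache, but does not shrink the analyzed plan
          val planLines = df.queryExecution.analyzed.toString.split("\n").length
          println(s"iteration $i: analyzed plan has $planLines lines")
        }
        spark.stop()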

      Attachments

        Activity

          People

            Assignee: Unassigned
            Reporter: Joseph K. Bradley (josephkb)
            Votes: 10
            Watchers: 27

            Dates

              Created:
              Updated:
              Resolved: