Spark / SPARK-24613

Cache with UDF could not be matched with subsequent dependent caches


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.3.0
    • Fix Version/s: 2.3.2, 2.4.0
    • Component/s: SQL
    • Labels:
      None

      Description

      When caching a query, we generate its execution plan from the query's logical plan. However, the logical plan we get from the Dataset has already been analyzed, and when we try to get the execution plan, this already-analyzed logical plan is analyzed again in the new QueryExecution object. Unfortunately, some rules have side effects if applied multiple times; in this case it is the HandleNullInputsForUDF rule. The re-analyzed plan now has an extra null check and can't be matched against the same plan. The following test would fail, since df2's execution plan inside the CacheManager does not depend on df1.

      // Requires (in a Spark SQL test suite):
      //   import org.apache.spark.sql.functions.{udf, sum}
      //   import org.scalatest.concurrent.TimeLimits.failAfter
      //   import org.scalatest.time.SpanSugar._
      test("cache UDF result correctly 2") {
        val expensiveUDF = udf({ x: Int => Thread.sleep(10000); x })
        val df = spark.range(0, 10).toDF("a").withColumn("b", expensiveUDF($"a"))
        val df2 = df.agg(sum(df("b")))
      
        df.cache()
        df.count()
        df2.cache()
      
        // The UDF has been evaluated during caching, and thus should not be re-evaluated here.
        failAfter(5 seconds) {
          df2.collect()
        }
      }
      

      While it might be worth re-visiting such analysis rules, we can also fix the CacheManager to avoid these potential problems.
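      The non-idempotence at the heart of this bug can be sketched outside Spark with a toy expression tree (the names below are illustrative stand-ins, not Spark's actual classes): a rule that unconditionally wraps UDF inputs in a null check produces a structurally different tree on each application, so a re-analyzed plan no longer equals the plan stored by the cache.

      ```scala
      // Minimal sketch of a non-idempotent analysis rule, in the spirit of
      // HandleNullInputsForUDF. Expr, Attr, NullCheck, and Udf are toy types
      // invented for this illustration.
      sealed trait Expr
      case class Attr(name: String) extends Expr
      case class NullCheck(child: Expr) extends Expr
      case class Udf(input: Expr) extends Expr

      // The "rule": wrap every UDF input in a null check. It wraps
      // unconditionally, so it is not idempotent.
      def handleNullInputs(e: Expr): Expr = e match {
        case Udf(input)   => Udf(NullCheck(handleNullInputs(input)))
        case NullCheck(c) => NullCheck(handleNullInputs(c))
        case a: Attr      => a
      }

      val plan          = Udf(Attr("a"))
      val analyzedOnce  = handleNullInputs(plan)          // Udf(NullCheck(Attr("a")))
      val analyzedTwice = handleNullInputs(analyzedOnce)  // extra NullCheck layer

      // The cached plan and the re-analyzed plan no longer match structurally:
      println(analyzedOnce == analyzedTwice)  // false
      ```

      Because plan matching in the cache relies on structural comparison, the extra null check is enough to make the lookup miss, which is why df2 above re-evaluates the expensive UDF instead of reading the cached result.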

            People

            • Assignee:
              maryannxue (Maryann Xue)
            • Reporter:
              maryannxue (Maryann Xue)
            • Votes:
              0
            • Watchers:
              2