Spark > SPARK-35553 Improve correlated subqueries > SPARK-40862

Unexpected operators when rewriting scalar subqueries with non-deterministic expressions


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.4.0
    • Fix Version/s: 3.4.0
    • Component/s: SQL
    • Labels: None

    Description

      Since SPARK-28379, Spark has supported non-aggregated single-row correlated subqueries. SPARK-40800 handles the majority of the cases where projects can be collapsed. But Spark can throw exceptions for single-row subqueries with non-deterministic expressions. For example:

      CREATE TEMP VIEW t1 AS SELECT ARRAY('a', 'b') a;
      
      SELECT (
        SELECT array_sort(a, (i, j) -> rank[i] - rank[j])[0] + r + r AS sorted
        FROM (SELECT MAP('a', 1, 'b', 2) rank, rand() as r)
      ) FROM t1

      This throws an exception:

      Unexpected operator Join Inner
      :- Aggregate [[a,b]], [[a,b] AS a#253]
      :  +- OneRowRelation
      +- Project [map(keys: [a,b], values: [1,2]) AS rank#241, rand(86882494013664043) AS r#242]
         +- OneRowRelation
       in correlated subquery

      This is because when Spark rewrites correlated scalar subqueries, it checks whether the subquery is subject to the COUNT bug (where an outer row with no matching inner rows should produce 0 for COUNT, not NULL). To do this, it splits the subquery plan into three parts: the operators above the aggregate, the aggregate itself, and the operators below the aggregate (see `splitSubquery` in the `RewriteCorrelatedScalarSubquery` rule).
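      For illustration, here is a toy Scala sketch of that pattern match. The plan-node classes below are hypothetical stand-ins, not Spark's actual Catalyst classes, and `splitSubquery` here is a simplified model of the rule's behavior: it only tolerates Project/Filter operators above a single Aggregate, and anything else (such as the Join produced for a non-aggregated single-row subquery) is rejected as "unexpected".

      ```scala
      // Toy model of logical plan nodes (hypothetical, simplified; the real
      // classes live in org.apache.spark.sql.catalyst.plans.logical).
      sealed trait LogicalPlan
      case class Project(exprs: Seq[String], child: LogicalPlan) extends LogicalPlan
      case class Filter(condition: String, child: LogicalPlan) extends LogicalPlan
      case class Aggregate(groupBy: Seq[String], aggs: Seq[String], child: LogicalPlan) extends LogicalPlan
      case class Join(left: LogicalPlan, right: LogicalPlan) extends LogicalPlan
      case object OneRowRelation extends LogicalPlan

      // Sketch of the split: walk down through Project/Filter operators until an
      // Aggregate is found, collecting the "top part" along the way. Any other
      // operator fails with an error mirroring the one in this ticket.
      def splitSubquery(plan: LogicalPlan): Either[String, (List[LogicalPlan], Aggregate)] = {
        @annotation.tailrec
        def loop(p: LogicalPlan, topPart: List[LogicalPlan]): Either[String, (List[LogicalPlan], Aggregate)] =
          p match {
            case a: Aggregate => Right((topPart.reverse, a))
            case pr: Project  => loop(pr.child, pr :: topPart)
            case f: Filter    => loop(f.child, f :: topPart)
            case other        => Left(s"Unexpected operator ${other.getClass.getSimpleName} in correlated subquery")
          }
        loop(plan, Nil)
      }
      ```

      In this model, a plan whose top operator is a Join (as in the error above) never reaches an Aggregate through the allowed Project/Filter chain, so the split fails even though the subquery is a legitimate single-row subquery.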

      This pattern is very restrictive and does not accommodate non-aggregated single-row subqueries, which need not contain an aggregate at all. We should fix this issue.



          People

            Assignee: allisonwang-db Allison Wang
            Reporter: allisonwang-db Allison Wang
            Votes: 0
            Watchers: 3

            Dates

              Created:
              Updated:
              Resolved: