Details

    • Type: Sub-task
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.0.0
    • Fix Version/s: None
    • Component/s: SQL
    • Labels: None

    Description

      PostgreSQL accepts the query below, which uses an empty grouping expression (group by (), i.e., the whole input is aggregated as a single group), but Spark fails to parse it:

      postgres=# create table gstest2 (a integer, b integer, c integer, d integer, e integer, f integer, g integer, h integer);
      postgres=# insert into gstest2 values
      postgres-#   (1, 1, 1, 1, 1, 1, 1, 1),
      postgres-#   (1, 1, 1, 1, 1, 1, 1, 2),
      postgres-#   (1, 1, 1, 1, 1, 1, 2, 2),
      postgres-#   (1, 1, 1, 1, 1, 2, 2, 2),
      postgres-#   (1, 1, 1, 1, 2, 2, 2, 2),
      postgres-#   (1, 1, 1, 2, 2, 2, 2, 2),
      postgres-#   (1, 1, 2, 2, 2, 2, 2, 2),
      postgres-#   (1, 2, 2, 2, 2, 2, 2, 2),
      postgres-#   (2, 2, 2, 2, 2, 2, 2, 2);
      INSERT 0 9
      
      postgres=# select v.c, (select count(*) from gstest2 group by () having v.c) from (values (false),(true)) v(c) order by v.c;
       c | count 
      ---+-------
       f |      
       t |    18
      (2 rows)
      
      scala> sql("""select v.c, (select count(*) from gstest2 group by () having v.c) from (values (false),(true)) v(c) order by v.c""").show
      org.apache.spark.sql.catalyst.parser.ParseException:
      no viable alternative at input '()'(line 1, pos 52)
      
      == SQL ==
      select v.c, (select count(*) from gstest2 group by () having v.c) from (values (false),(true)) v(c) order by v.c
      ----------------------------------------------------^^^
      
        at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:268)
        at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:135)
        at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:48)
        at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:85)
        at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:605)
        at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:605)
        ... 47 elided
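
      As a minimal parser-level check (a sketch only, reusing the gstest2 table from above; spark.sessionState.sqlParser is used here just to isolate parsing from analysis), the grammar accepts a non-empty grouping list but rejects the empty one:

      scala> spark.sessionState.sqlParser.parsePlan("select count(*) from gstest2 group by a")
      // returns an unresolved LogicalPlan
      scala> spark.sessionState.sqlParser.parsePlan("select count(*) from gstest2 group by ()")
      // throws the same ParseException: no viable alternative at input '()'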
      

People

    • Assignee: Unassigned
    • Reporter: Takeshi Yamamuro (maropu)