[SPARK-40562] Add spark.sql.legacy.groupingIdWithAppendedUserGroupBy


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Fix Version/s: 3.3.1, 3.2.3, 3.4.0
    • Component/s: SQL

    Description

      scala> sql("SELECT count(*), grouping__id from (VALUES (1,1,1),(2,2,2)) AS t(k1,k2,v) GROUP BY k1 GROUPING SETS (k2) ").show()
      +--------+------------+
      |count(1)|grouping__id|
      +--------+------------+
      |       1|           2|
      |       1|           2|
      +--------+------------+
      
      scala> sql("set spark.sql.legacy.groupingIdWithAppendedUserGroupBy=true")
      res1: org.apache.spark.sql.DataFrame = [key: string, value: string]
      
      scala> sql("SELECT count(*), grouping__id from (VALUES (1,1,1),(2,2,2)) AS t(k1,k2,v) GROUP BY k1 GROUPING SETS (k2) ").show()
      +--------+------------+
      |count(1)|grouping__id|
      +--------+------------+
      |       1|           1|
      |       1|           1|
      +--------+------------+ 
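      For context (not part of the original report), the 2-vs-1 difference above can be understood by treating grouping__id as a bitmask over the grouping columns, where a column's bit is 1 when the column is absent from the current grouping set. The sketch below is a minimal illustration of that bitmask arithmetic, not Spark's actual implementation; the helper `groupingId` and the assumed column orders (k1 before k2 by default, appended-after with the legacy flag) are this example's own assumptions.

      ```scala
      // Sketch: compute a grouping__id bitmask from an ordered list of
      // grouping columns and the set of columns in the current grouping set.
      // Bit is 1 when the column is NOT part of the grouping set.
      def groupingId(columns: Seq[String], groupingSet: Set[String]): Int =
        columns.foldLeft(0) { (acc, c) =>
          (acc << 1) | (if (groupingSet.contains(c)) 0 else 1)
        }

      // Assumed default order (k1, k2): k1 absent from the set -> 0b10 = 2,
      // matching the first result above.
      println(groupingId(Seq("k1", "k2"), Set("k2")))

      // Assumed legacy order with the user GROUP BY column appended after the
      // grouping-sets columns (k2, k1) -> 0b01 = 1, matching the second result.
      println(groupingId(Seq("k2", "k1"), Set("k2")))
      ```

      Under these assumptions, the legacy flag only changes where the user-given GROUP BY column lands in the column order, which is what flips the reported grouping__id from 2 to 1.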


            People

              Assignee: Dongjoon Hyun
              Reporter: Dongjoon Hyun