SPARK-29702
Parent: SPARK-27764 Feature Parity between PostgreSQL and Spark

Resolve group-by columns with integrity constraints


Details

    • Type: Sub-task
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.0.0
    • Fix Version/s: None
    • Component/s: SQL
    • Labels: None

    Description

      In PostgreSQL, integrity constraints affect how grouping columns are resolved in the analyzer; once a column is declared a primary key, the other columns of the table are functionally dependent on it and may appear in the select list without being grouped or aggregated:

      postgres=# \d gstest3
                    Table "public.gstest3"
       Column |  Type   | Collation | Nullable | Default 
      --------+---------+-----------+----------+---------
       a      | integer |           |          | 
       b      | integer |           |          | 
       c      | integer |           |          | 
       d      | integer |           |          | 
      
      postgres=# select a, d, grouping(a,b,c) from gstest3 group by grouping sets ((a,b), (a,c));
      ERROR:  column "gstest3.d" must appear in the GROUP BY clause or be used in an aggregate function
      LINE 1: select a, d, grouping(a,b,c) from gstest3 group by grouping ...
                        ^
      postgres=# alter table gstest3 add primary key (a);
      ALTER TABLE
      
      postgres=# select a, d, grouping(a,b,c) from gstest3 group by grouping sets ((a,b), (a,c));
       a | d | grouping 
      ---+---+----------
       1 | 1 |        1
       2 | 2 |        1
       1 | 1 |        2
       2 | 2 |        2
      (4 rows)
       
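      For comparison, here is a minimal Scala sketch of the analogous query against a Spark temp view shaped like gstest3 (the object name, app name, and local master are placeholders, and the grouping() projection is omitted for brevity). Spark currently has no primary-key constraint to declare on the view, so the analyzer rejects the non-grouped column d in every case, which is the resolution gap this sub-task targets:

      import org.apache.spark.sql.{AnalysisException, SparkSession}

      object GroupByConstraintSketch {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder()
            .appName("group-by-constraint-sketch")
            .master("local[*]")
            .getOrCreate()
          import spark.implicits._

          // A temp view shaped like gstest3 above; Spark has no notion of a
          // primary key on it.
          Seq((1, 1, 1, 1), (2, 2, 2, 2)).toDF("a", "b", "c", "d")
            .createOrReplaceTempView("gstest3")

          // `d` is neither grouped nor aggregated, so analysis fails, mirroring
          // PostgreSQL's behaviour before the ALTER TABLE shown above.
          val query =
            "SELECT a, d FROM gstest3 GROUP BY GROUPING SETS ((a, b), (a, c))"
          try {
            spark.sql(query).show()
          } catch {
            case e: AnalysisException =>
              println(s"Analysis failed: ${e.getMessage}")
          }

          spark.stop()
        }
      }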

            People

              Assignee: Unassigned
              Reporter: Takeshi Yamamuro (maropu)
