Spark / SPARK-17760

DataFrame's pivot doesn't see column created in groupBy


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.0
    • Fix Version/s: 2.0.3, 2.1.0
    • Component/s: PySpark
    • Environment: Databricks Community Edition, Spark 2.0.0, PySpark, Python 2

      Description

      Related to https://stackoverflow.com/questions/39817993/pivoting-with-missing-values. I'm not completely sure if this is a bug or expected behavior.

      When you `groupBy` by a column that is generated inside the `groupBy` call itself (an expression with an alias), the `pivot` method apparently doesn't find this column during analysis.

      E.g.

      from pyspark.sql.functions import col, dayofyear, hour

      df = (sc.parallelize([(1.0, "2016-03-30 01:00:00"),
                            (30.2, "2015-01-02 03:00:02")])
              .toDF(["amount", "Date"])
              .withColumn("Date", col("Date").cast("timestamp")))
      
      (df.withColumn("hour", hour("date"))
         .groupBy(dayofyear("date").alias("date"))
         .pivot("hour").sum("amount").show())

      This throws the following exception:

      AnalysisException: u'resolved attribute(s) date#140688 missing from dayofyear(date)#140994,hour#140977,sum(`amount`)#140995 in operator !Aggregate [dayofyear(cast(date#140688 as date))], [dayofyear(cast(date#140688 as date)) AS dayofyear(date)#140994, pivotfirst(hour#140977, sum(`amount`)#140995, 1, 3, 0, 0) AS __pivot_sum(`amount`) AS `sum(``amount``)`#141001];'

      To work around it, you have to add the `date` column with `withColumn` before grouping and pivoting, as sketched below.
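
      A minimal sketch of that workaround, assuming the same `df` as in the example above (the `day` column name is only an illustrative choice, not part of the original report):

      from pyspark.sql.functions import dayofyear, hour

      # Materialize the derived columns first, then group on the plain
      # column name instead of an expression created inside groupBy.
      (df.withColumn("hour", hour("Date"))
         .withColumn("day", dayofyear("Date"))   # "day" is an assumed name for the day-of-year column
         .groupBy("day")
         .pivot("hour")
         .sum("amount")
         .show())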

    People

    • Assignee: Andrew Ray (a1ray)
    • Reporter: Alberto Bonsanto (Bonsanto)
