SPARK-10504

aggregate where NULL is defined as the value expression aborts when SUM is used


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 1.3.1, 1.4.1
    • Fix Version/s: 1.5.0
    • Component/s: SQL
    • Labels: None

    Description

      In ISO-SQL, the context determines an implicit type for NULL, or a vendor may require an explicit type via CAST(NULL AS INTEGER). Spark appears to presume a long type, e.g. select min(NULL), max(NULL) works, but a query such as the following aborts.

      select sum ( null ) from tversion

      Operation: execute
      Errors:
      org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5232.0 failed 4 times, most recent failure: Lost task 0.3 in stage 5232.0 (TID 18531, sandbox.hortonworks.com): scala.MatchError: NullType (of class org.apache.spark.sql.types.NullType$)
      	at org.apache.spark.sql.catalyst.expressions.Cast.org$apache$spark$sql$catalyst$expressions$Cast$$cast(Cast.scala:403)
      	at org.apache.spark.sql.catalyst.expressions.Cast.cast$lzycompute(Cast.scala:422)
      	at org.apache.spark.sql.catalyst.expressions.Cast.cast(Cast.scala:422)
      	at org.apache.spark.sql.catalyst.expressions.Cast.eval(Cast.scala:426)
      	at org.apache.spark.sql.catalyst.expressions.Coalesce.eval(nullFunctions.scala:51)
      	at org.apache.spark.sql.catalyst.expressions.Add.eval(arithmetic.scala:119)
      	at org.apache.spark.sql.catalyst.expressions.Coalesce.eval(nullFunctions.scala:51)
      	at org.apache.spark.sql.catalyst.expressions.MutableLiteral.update(literals.scala:82)
      	at org.apache.spark.sql.catalyst.expressions.SumFunction.update(aggregates.scala:581)
      	at org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1$$anonfun$6.apply(Aggregate.scala:133)
      	at org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1$$anonfun$6.apply(Aggregate.scala:126)
      	at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:634)
      	at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:634)
      	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
      	at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
      	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
      	at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
      	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
      	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
      	at org.apache.spark.scheduler.Task.run(Task.scala:64)
      	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      	at java.lang.Thread.run(Thread.java:745)
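
      A workaround until the fix lands in 1.5.0 is to give NULL an explicit type so that Cast never sees NullType. The sketch below is a minimal local reproduction against a Spark 1.x SQLContext; the single-row contents of tversion and the CAST(NULL AS INT) workaround are assumptions based on the ISO-SQL note above, not part of the original report.

      // Minimal repro sketch for SPARK-10504 on a Spark 1.x SQLContext.
      // Assumption: tversion is stood in for by a one-row temp table; any
      // registered table reproduces the error.
      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.sql.SQLContext

      object SumNullRepro {
        def main(args: Array[String]): Unit = {
          val sc = new SparkContext(
            new SparkConf().setAppName("SumNullRepro").setMaster("local[*]"))
          val sqlContext = new SQLContext(sc)
          sqlContext.createDataFrame(Seq(Tuple1(1))).registerTempTable("tversion")

          // min(NULL)/max(NULL) succeed: Spark picks a type for the untyped NULL.
          sqlContext.sql("select min(null), max(null) from tversion").show()

          // The failing query from this report: on 1.3.1/1.4.1 the job aborts
          // with scala.MatchError: NullType thrown inside Cast.
          try {
            sqlContext.sql("select sum(null) from tversion").show()
          } catch {
            case e: Exception => println("sum(null) failed: " + e.getMessage)
          }

          // Hypothetical workaround: give NULL an explicit type via CAST, as
          // ISO-SQL vendors often require.
          sqlContext.sql("select sum(cast(null as int)) from tversion").show()

          sc.stop()
        }
      }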
      


          People

            Assignee: Yin Huai (yhuai)
            Reporter: N Campbell (the6campbells)
            Votes: 0
            Watchers: 4
