[SPARK-20086] Issue with PySpark 2.1.0 window function

Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.1.0
    • Fix Version/s: 2.1.1, 2.2.0
    • Component/s: PySpark
    • Labels: None

    Description

      Originally posted on Stack Overflow.

      I get an error when working with PySpark window functions. Here is some example code:

          import pyspark
          import pyspark.sql.functions as sf
          import pyspark.sql.types as sparktypes
          from pyspark.sql import window
          
          sc = pyspark.SparkContext()
          sqlc = pyspark.SQLContext(sc)
          rdd = sc.parallelize([(1, 2.0), (1, 3.0), (1, 1.), (1, -2.), (1, -1.)])
          df = sqlc.createDataFrame(rdd, ["x", "AmtPaid"])
          df.show()
      
      

      gives:

      +---+-------+
      |  x|AmtPaid|
      +---+-------+
      |  1|    2.0|
      |  1|    3.0|
      |  1|    1.0|
      |  1|   -2.0|
      |  1|   -1.0|
      +---+-------+

      Next, compute the cumulative sum:

      test.py
          win_spec_max = (window.Window
                          .partitionBy(['x'])
                          .rowsBetween(window.Window.unboundedPreceding, 0))
          df = df.withColumn('AmtPaidCumSum',
                             sf.sum(sf.col('AmtPaid')).over(win_spec_max))
          df.show()
      

      gives:

      +---+-------+-------------+
      |  x|AmtPaid|AmtPaidCumSum|
      +---+-------+-------------+
      |  1|    2.0|          2.0|
      |  1|    3.0|          5.0|
      |  1|    1.0|          6.0|
      |  1|   -2.0|          4.0|
      |  1|   -1.0|          3.0|
      +---+-------+-------------+

      Next, compute the cumulative max:

          df = df.withColumn('AmtPaidCumSumMax', sf.max(sf.col('AmtPaidCumSum')).over(win_spec_max))
      
          df.show()
      

      gives this error:

           Py4JJavaError: An error occurred while calling o2609.showString.
      
      
      with traceback:
      
      
          Py4JJavaErrorTraceback (most recent call last)
          <ipython-input-215-3106d06b6e49> in <module>()
          ----> 1 df.show()
      
          /Users/<>/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/dataframe.pyc in show(self, n, truncate)
              316         """
              317         if isinstance(truncate, bool) and truncate:
          --> 318             print(self._jdf.showString(n, 20))
              319         else:
              320             print(self._jdf.showString(n, int(truncate)))
      
          /Users/<>/.virtualenvs/<>/lib/python2.7/site-packages/py4j/java_gateway.pyc in __call__(self, *args)
             1131         answer = self.gateway_client.send_command(command)
             1132         return_value = get_return_value(
          -> 1133             answer, self.gateway_client, self.target_id, self.name)
             1134 
             1135         for temp_arg in temp_args:
      
          /Users/<>/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/utils.pyc in deco(*a, **kw)
               61     def deco(*a, **kw):
               62         try:
          ---> 63             return f(*a, **kw)
               64         except py4j.protocol.Py4JJavaError as e:
               65             s = e.java_exception.toString()
      
          /Users/<>/.virtualenvs/<>/lib/python2.7/site-packages/py4j/protocol.pyc in get_return_value(answer, gateway_client, target_id, name)
              317                 raise Py4JJavaError(
              318                     "An error occurred while calling {0}{1}{2}.\n".
          --> 319                     format(target_id, ".", name), value)
              320             else:
              321                 raise Py4JError(
      

      But interestingly enough, if I introduce another change before the second window operation, say inserting a column, then it does not give that error:

          df = df.withColumn('MaxBound', sf.lit(6.))
          df.show()
      
      +---+-------+-------------+--------+
      |  x|AmtPaid|AmtPaidCumSum|MaxBound|
      +---+-------+-------------+--------+
      |  1|    2.0|          2.0|     6.0|
      |  1|    3.0|          5.0|     6.0|
      |  1|    1.0|          6.0|     6.0|
      |  1|   -2.0|          4.0|     6.0|
      |  1|   -1.0|          3.0|     6.0|
      +---+-------+-------------+--------+
          # then apply the second window operation
          df = df.withColumn('AmtPaidCumSumMax', sf.max(sf.col('AmtPaidCumSum')).over(win_spec_max))
          df.show()
      
      +---+-------+-------------+--------+----------------+
      |  x|AmtPaid|AmtPaidCumSum|MaxBound|AmtPaidCumSumMax|
      +---+-------+-------------+--------+----------------+
      |  1|    2.0|          2.0|     6.0|             2.0|
      |  1|    3.0|          5.0|     6.0|             5.0|
      |  1|    1.0|          6.0|     6.0|             6.0|
      |  1|   -2.0|          4.0|     6.0|             6.0|
      |  1|   -1.0|          3.0|     6.0|             6.0|
      +---+-------+-------------+--------+----------------+

      I do not understand this behaviour.
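
      Presumably (just a guess from the trace) the `Couldn't find AmtPaidCumSum#...` line means the second window operation gets bound against a child plan that no longer carries the first window's output column, and any projection inserted in between keeps the two window operations apart. Along the same lines, here is a sketch of another workaround that seems to avoid the error (not verified as part of this report), applied right after the cumulative-sum step and using only the objects defined above (`sqlc`, `sf`, `win_spec_max`):

          # Sketch: rebuilding the DataFrame from its RDD and schema severs the
          # logical plan, so the two window operations cannot be planned together.
          df = sqlc.createDataFrame(df.rdd, df.schema)
          df = df.withColumn('AmtPaidCumSumMax',
                             sf.max(sf.col('AmtPaidCumSum')).over(win_spec_max))
          df.show()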

      Well, so far so good. But then I try another operation and again get a similar error:

          def _udf_compare_cumsum_sll(x):
              if x['AmtPaidCumSumMax'] >= x['MaxBound']:
                  output = 0
              else:
                  output = x['AmtPaid']
              return output
      
          udf_compare_cumsum_sll = sf.udf(_udf_compare_cumsum_sll, sparktypes.FloatType())
          df = df.withColumn('AmtPaidAdjusted', udf_compare_cumsum_sll(sf.struct([df[x] for x in df.columns])))
          df.show()
      

      gives:

          Py4JJavaErrorTraceback (most recent call last)
          <ipython-input-18-3106d06b6e49> in <module>()
          ----> 1 df.show()
      
          /Users/<>/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/dataframe.pyc in show(self, n, truncate)
              316         """
              317         if isinstance(truncate, bool) and truncate:
          --> 318             print(self._jdf.showString(n, 20))
              319         else:
              320             print(self._jdf.showString(n, int(truncate)))
      
          /Users/<>/.virtualenvs/<>/lib/python2.7/site-packages/py4j/java_gateway.pyc in __call__(self, *args)
             1131         answer = self.gateway_client.send_command(command)
             1132         return_value = get_return_value(
          -> 1133             answer, self.gateway_client, self.target_id, self.name)
             1134 
             1135         for temp_arg in temp_args:
      
          /Users/<>/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/utils.pyc in deco(*a, **kw)
               61     def deco(*a, **kw):
               62         try:
          ---> 63             return f(*a, **kw)
               64         except py4j.protocol.Py4JJavaError as e:
               65             s = e.java_exception.toString()
      
          /Users/<>/.virtualenvs/<>/lib/python2.7/site-packages/py4j/protocol.pyc in get_return_value(answer, gateway_client, target_id, name)
              317                 raise Py4JJavaError(
              318                     "An error occurred while calling {0}{1}{2}.\n".
          --> 319                     format(target_id, ".", name), value)
              320             else:
              321                 raise Py4JError(
      
          Py4JJavaError: An error occurred while calling o91.showString.
          : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 36.0 failed 1 times, most recent failure: Lost task 0.0 in stage 36.0 (TID 645, localhost, executor driver): org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, tree: AmtPaidCumSum#10
      

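      As a side note (my observation, separate from the crash): the UDF above is declared with FloatType but returns the Python int 0 on one branch; PySpark does not cast UDF return values, so a mismatched return type typically shows up as null. A type-consistent sketch, unrelated to the binding error reported here:

          # Hypothetical fix for the return type only: return floats on both
          # branches so the values match the declared FloatType.
          def _udf_compare_cumsum_sll(x):
              if x['AmtPaidCumSumMax'] >= x['MaxBound']:
                  return 0.0
              return float(x['AmtPaid'])

          udf_compare_cumsum_sll = sf.udf(_udf_compare_cumsum_sll, sparktypes.FloatType())
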
      I wonder if someone could reproduce this behaviour ...
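
      For anyone trying to reproduce: the snippets above, assembled into a single script (nothing new added; the last withColumn is the one that fails):

          import pyspark
          import pyspark.sql.functions as sf
          from pyspark.sql import window

          sc = pyspark.SparkContext()
          sqlc = pyspark.SQLContext(sc)

          rdd = sc.parallelize([(1, 2.0), (1, 3.0), (1, 1.), (1, -2.), (1, -1.)])
          df = sqlc.createDataFrame(rdd, ["x", "AmtPaid"])

          # running cumulative frame: unbounded preceding -> current row
          win_spec_max = (window.Window
                          .partitionBy(['x'])
                          .rowsBetween(window.Window.unboundedPreceding, 0))

          # first window operation: works
          df = df.withColumn('AmtPaidCumSum',
                             sf.sum(sf.col('AmtPaid')).over(win_spec_max))

          # second window operation over the derived column: fails at df.show()
          # with "Couldn't find AmtPaidCumSum#..."
          df = df.withColumn('AmtPaidCumSumMax',
                             sf.max(sf.col('AmtPaidCumSum')).over(win_spec_max))
          df.show()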

      Here is the complete log:

          Py4JJavaErrorTraceback (most recent call last)
          <ipython-input-69-3106d06b6e49> in <module>()
          ----> 1 df.show()
      
          /Users/<>/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/dataframe.pyc in show(self, n, truncate)
              316         """
              317         if isinstance(truncate, bool) and truncate:
          --> 318             print(self._jdf.showString(n, 20))
              319         else:
              320             print(self._jdf.showString(n, int(truncate)))
      
          /Users/<>/.virtualenvs/<>/lib/python2.7/site-packages/py4j/java_gateway.pyc in __call__(self, *args)
             1131         answer = self.gateway_client.send_command(command)
             1132         return_value = get_return_value(
          -> 1133             answer, self.gateway_client, self.target_id, self.name)
             1134
             1135         for temp_arg in temp_args:
      
          /Users/<>/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/utils.pyc in deco(*a, **kw)
               61     def deco(*a, **kw):
               62         try:
          ---> 63             return f(*a, **kw)
               64         except py4j.protocol.Py4JJavaError as e:
               65             s = e.java_exception.toString()
      
          /Users/<>/.virtualenvs/<>/lib/python2.7/site-packages/py4j/protocol.pyc in get_return_value(answer, gateway_client, target_id, name)
              317                 raise Py4JJavaError(
              318                     "An error occurred while calling {0}{1}{2}.\n".
          --> 319                     format(target_id, ".", name), value)
              320             else:
              321                 raise Py4JError(
      
          Py4JJavaError: An error occurred while calling o703.showString.
          : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 119.0 failed 1 times, most recent failure: Lost task 0.0 in stage 119.0 (TID 1817, localhost, executor driver): org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, tree: AmtPaidCumSum#2076
          	at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
          	at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:88)
          	at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:87)
          	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:288)
          	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:288)
          	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
          	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:287)
          	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:293)
          	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:293)
          	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5$$anonfun$apply$11.apply(TreeNode.scala:360)
          	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
          	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
          	at scala.collection.immutable.List.foreach(List.scala:381)
          	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
          	at scala.collection.immutable.List.map(List.scala:285)
          	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:358)
          	at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:188)
          	at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:329)
          	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:293)
          	at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:277)
          	at org.apache.spark.sql.catalyst.expressions.BindReferences$.bindReference(BoundAttribute.scala:87)
          	at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$$anonfun$bind$1.apply(GenerateMutableProjection.scala:38)
          	at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$$anonfun$bind$1.apply(GenerateMutableProjection.scala:38)
          	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
          	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
          	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
          	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
          	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
          	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
          	at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$.bind(GenerateMutableProjection.scala:38)
          	at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$.generate(GenerateMutableProjection.scala:44)
          	at org.apache.spark.sql.execution.SparkPlan.newMutableProjection(SparkPlan.scala:353)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2$$anonfun$org$apache$spark$sql$execution$window$WindowExec$$anonfun$$processor$1$1.apply(WindowExec.scala:203)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2$$anonfun$org$apache$spark$sql$execution$window$WindowExec$$anonfun$$processor$1$1.apply(WindowExec.scala:202)
          	at org.apache.spark.sql.execution.window.AggregateProcessor$.apply(AggregateProcessor.scala:98)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2.org$apache$spark$sql$execution$window$WindowExec$$anonfun$$processor$1(WindowExec.scala:198)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2$$anonfun$6.apply(WindowExec.scala:225)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2$$anonfun$6.apply(WindowExec.scala:222)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14$$anon$1$$anonfun$16.apply(WindowExec.scala:318)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14$$anon$1$$anonfun$16.apply(WindowExec.scala:318)
          	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
          	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
          	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
          	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
          	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
          	at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14$$anon$1.<init>(WindowExec.scala:318)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14.apply(WindowExec.scala:290)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14.apply(WindowExec.scala:289)
          	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
          	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
          	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
          	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
          	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
          	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
          	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
          	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
          	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
          	at org.apache.spark.scheduler.Task.run(Task.scala:99)
          	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
          	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
          	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
          	at java.lang.Thread.run(Thread.java:745)
          Caused by: java.lang.RuntimeException: Couldn't find AmtPaidCumSum#2076 in [sum#2299,max#2300,x#2066L,AmtPaid#2067]
          	at scala.sys.package$.error(package.scala:27)
          	at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:94)
          	at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:88)
          	at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
          	... 62 more
      
          Driver stacktrace:
          	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
          	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
          	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
          	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
          	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
          	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
          	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
          	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
          	at scala.Option.foreach(Option.scala:257)
          	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
          	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
          	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
          	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
          	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
          	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
          	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
          	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
          	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
          	at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:333)
          	at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
          	at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2371)
          	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
          	at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2765)
          	at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2370)
          	at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2377)
          	at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2113)
          	at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2112)
          	at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2795)
          	at org.apache.spark.sql.Dataset.head(Dataset.scala:2112)
          	at org.apache.spark.sql.Dataset.take(Dataset.scala:2327)
          	at org.apache.spark.sql.Dataset.showString(Dataset.scala:248)
          	at sun.reflect.GeneratedMethodAccessor83.invoke(Unknown Source)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:498)
          	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
          	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
          	at py4j.Gateway.invoke(Gateway.java:280)
          	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
          	at py4j.commands.CallCommand.execute(CallCommand.java:79)
          	at py4j.GatewayConnection.run(GatewayConnection.java:214)
          	at java.lang.Thread.run(Thread.java:745)
          Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, tree: null
          	at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
          	at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:88)
          	at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:87)
          	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:288)
          	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:288)
          	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
          	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:287)
          	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:293)
          	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:293)
          	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5$$anonfun$apply$11.apply(TreeNode.scala:360)
          	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
          	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
          	at scala.collection.immutable.List.foreach(List.scala:381)
          	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
          	at scala.collection.immutable.List.map(List.scala:285)
          	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:358)
          	at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:188)
          	at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:329)
          	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:293)
          	at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:277)
          	at org.apache.spark.sql.catalyst.expressions.BindReferences$.bindReference(BoundAttribute.scala:87)
          	at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$$anonfun$bind$1.apply(GenerateMutableProjection.scala:38)
          	at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$$anonfun$bind$1.apply(GenerateMutableProjection.scala:38)
          	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
          	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
          	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
          	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
          	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
          	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
          	at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$.bind(GenerateMutableProjection.scala:38)
          	at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$.generate(GenerateMutableProjection.scala:44)
          	at org.apache.spark.sql.execution.SparkPlan.newMutableProjection(SparkPlan.scala:353)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2$$anonfun$org$apache$spark$sql$execution$window$WindowExec$$anonfun$$processor$1$1.apply(WindowExec.scala:203)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2$$anonfun$org$apache$spark$sql$execution$window$WindowExec$$anonfun$$processor$1$1.apply(WindowExec.scala:202)
          	at org.apache.spark.sql.execution.window.AggregateProcessor$.apply(AggregateProcessor.scala:98)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2.org$apache$spark$sql$execution$window$WindowExec$$anonfun$$processor$1(WindowExec.scala:198)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2$$anonfun$6.apply(WindowExec.scala:225)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2$$anonfun$6.apply(WindowExec.scala:222)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14$$anon$1$$anonfun$16.apply(WindowExec.scala:318)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14$$anon$1$$anonfun$16.apply(WindowExec.scala:318)
          	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
          	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
          	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
          	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
          	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
          	at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14$$anon$1.<init>(WindowExec.scala:318)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14.apply(WindowExec.scala:290)
          	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14.apply(WindowExec.scala:289)
          	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
          	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
          	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
          	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
          	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
          	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
          	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
          	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
          	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
          	at org.apache.spark.scheduler.Task.run(Task.scala:99)
          	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
          	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
          	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
          	... 1 more
          Caused by: java.lang.RuntimeException: Couldn't find AmtPaidCumSum#2076 in [sum#2299,max#2300,x#2066L,AmtPaid#2067]
          	at scala.sys.package$.error(package.scala:27)
          	at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:94)
          	at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:88)
          	at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
          	... 62 more
      

            People

              Assignee: Herman van Hövell (hvanhovell)
              Reporter: mandar upadhye (mandarup)
              Votes: 1
              Watchers: 6
