SPARK-13819

Using a regexp_replace in a group by clause raises a NullPointerException


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 1.6.0
    • Fix Version/s: None
    • Component/s: SQL
    • Labels: None

    Description

      1. Start the Thrift JDBC server with start-thriftserver.sh.
      2. Connect with beeline.
      3. Run the following query against a table (a self-contained spark-shell sketch of the setup follows the stack trace below):

         SELECT t0.textsample
         FROM test t0
         ORDER BY regexp_replace(
           t0.code,
           concat('\\Q', 'a', '\\E'),
           regexp_replace(
             regexp_replace('zz', '\\\\', '\\\\\\\\'),
             '\\$',
             '\\\\\\$')) DESC;

      Problem: NullPointerException

      Trace:

      java.lang.NullPointerException
      at org.apache.spark.sql.catalyst.expressions.RegExpReplace.nullSafeEval(regexpExpressions.scala:224)
      at org.apache.spark.sql.catalyst.expressions.TernaryExpression.eval(Expression.scala:458)
      at org.apache.spark.sql.catalyst.expressions.InterpretedOrdering.compare(ordering.scala:36)
      at org.apache.spark.sql.catalyst.expressions.InterpretedOrdering.compare(ordering.scala:27)
      at scala.math.Ordering$class.gt(Ordering.scala:97)
      at org.apache.spark.sql.catalyst.expressions.InterpretedOrdering.gt(ordering.scala:27)
      at org.apache.spark.RangePartitioner.getPartition(Partitioner.scala:168)
      at org.apache.spark.sql.execution.Exchange$$anonfun$doExecute$1$$anonfun$4$$anonfun$apply$4.apply(Exchange.scala:180)
      at org.apache.spark.sql.execution.Exchange$$anonfun$doExecute$1$$anonfun$4$$anonfun$apply$4.apply(Exchange.scala:180)
      at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
      at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.insertAll(BypassMergeSortShuffleWriter.java:119)
      at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
      at org.apache.spark.scheduler.Task.run(Task.scala:88)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      at java.lang.Thread.run(Thread.java:745)
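
      The report does not include the schema of the test table, any sample data, or the beeline connection string, so the following is a minimal, hypothetical spark-shell (Scala) sketch for Spark 1.6.x that builds a stand-in table and runs the same query directly, without the Thrift server. The column names (textsample, code) are taken from the query; the sample rows are assumptions, and the spark-shell's predefined sc and sqlContext are used in place of the Thrift server's HiveContext. The global ORDER BY forces the range-partitioned shuffle that appears in the trace above (Exchange / RangePartitioner / InterpretedOrdering).

      // Hypothetical repro sketch for the Spark 1.6.x spark-shell, which predefines
      // `sc` and `sqlContext`. Assumed: the schema of `test` and the sample rows below.
      import sqlContext.implicits._

      // Stand-in for the `test` table referenced in the description.
      Seq(("row1", "abc"), ("row2", "zaz"))
        .toDF("textsample", "code")
        .registerTempTable("test")

      // Same ORDER BY expression as in the description; with the bug present this is
      // expected to fail with the NullPointerException in RegExpReplace.nullSafeEval.
      sqlContext.sql(
        """SELECT t0.textsample
          |FROM test t0
          |ORDER BY regexp_replace(
          |  t0.code,
          |  concat('\\Q', 'a', '\\E'),
          |  regexp_replace(
          |    regexp_replace('zz', '\\\\', '\\\\\\\\'),
          |    '\\$',
          |    '\\\\\\$')) DESC""".stripMargin).collect()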


    People

        Assignee: Unassigned
        Reporter: Javier Pérez (jperezb)
        Votes: 0
        Watchers: 4
