Description
The current implementation of regexp_extract throws an unhandled exception, shown below:
SELECT regexp_extract('1a 2b 14m', '\\d+')
[info] org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 22.0 failed 1 times, most recent failure: Lost task 1.0 in stage 22.0 (TID 33, 192.168.1.6, executor driver): java.lang.IndexOutOfBoundsException: No group 1
[info]   at java.util.regex.Matcher.group(Matcher.java:538)
[info]   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
[info]   at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
[info]   at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
[info]   at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
[info]   at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
[info]   at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
[info]   at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
[info]   at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
[info]   at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2156)
[info]   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
[info]   at org.apache.spark.scheduler.Task.run(Task.scala:127)
[info]   at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
[info]   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
[info]   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
[info]   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[info]   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[info]   at java.lang.Thread.run(Thread.java:748)
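For reference, here is a minimal sketch of the underlying JVM behavior, reproduced with plain java.util.regex outside Spark: regexp_extract defaults the group index to 1, and Matcher.group(1) fails whenever the compiled pattern contains no capturing groups.

import java.util.regex.Pattern

object RegexGroupRepro {
  def main(args: Array[String]): Unit = {
    // Same pattern as the query above: \d+ has zero capturing groups.
    val m = Pattern.compile("\\d+").matcher("1a 2b 14m")
    m.find()            // succeeds, matching "1"
    println(m.group(0)) // group 0 (the whole match) is always valid
    m.group(1)          // throws java.lang.IndexOutOfBoundsException: No group 1
  }
}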
I think we should handle this exception properly, e.g. by validating the requested group index against the pattern's group count instead of surfacing a raw IndexOutOfBoundsException from the executor.
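Below is a minimal sketch of one way to fail fast with a clearer error. The helper name checkGroupIndex and the message wording are my own illustration of the idea, not Spark's actual implementation.

import java.util.regex.Pattern

object RegexpExtractGuard {
  // Hypothetical guard: reject a group index the pattern cannot satisfy
  // up front, instead of letting Matcher.group(idx) fail mid-task.
  def checkGroupIndex(pattern: Pattern, idx: Int): Unit = {
    val groupCount = pattern.matcher("").groupCount()
    if (idx < 0 || idx > groupCount) {
      throw new IllegalArgumentException(
        s"Regex group count is $groupCount, but the specified group index is $idx")
    }
  }

  def main(args: Array[String]): Unit = {
    val p = Pattern.compile("\\d+")
    checkGroupIndex(p, 0) // ok: group 0 is the whole match
    checkGroupIndex(p, 1) // throws IllegalArgumentException with an actionable message
  }
}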