CarbonData / CARBONDATA-4254

Fix random CI failures in alter add and SI


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved

    Description

      The test cases below are currently ignored because they fail randomly in CI during alter add and SI (secondary index) operations. The two observed failures are captured in the Error Message and Stacktrace sections that follow.
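
      Since the flaky cases are being ignored rather than deleted, here is a minimal sketch of how that is done in the ScalaTest suites the integration tests use: swapping test(...) for ignore(...). The suite and test names below are illustrative, not the actual ignored cases.

          import org.scalatest.funsuite.AnyFunSuite // org.scalatest.FunSuite on older ScalaTest

          class AlterAddFlakySuite extends AnyFunSuite {
            // `ignore` keeps the test compiled and reported as ignored,
            // but skips execution until the flakiness is resolved.
            ignore("alter add column followed by SI query (illustrative name)") {
              // flaky assertions disabled pending CARBONDATA-4254
            }
          }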


      Error Message

      Job aborted due to stage failure: Task 1 in stage 1941.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1941.0 (TID 71657) (ubuntu executor driver): java.lang.RuntimeException: Failed to merge index files in path: /opt/jenkins/workspace/ApacheCarbon_PR_Builder_3.1/integration/spark/target/warehouse/nonindexmerge/Fact/Part0/Segment_1. Table status update with mergeIndex file has failed

      Stacktrace

      org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1941.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1941.0 (TID 71657) (ubuntu executor driver): java.lang.RuntimeException: Failed to merge index files in path: /opt/jenkins/workspace/ApacheCarbon_PR_Builder_3.1/integration/spark/target/warehouse/nonindexmerge/Fact/Part0/Segment_1. Table status update with mergeIndex file has failed
        at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:120)
        at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:374)
        at org.apache.spark.rdd.CarbonMergeFilesRDD$$anon$2.<init>(CarbonMergeFilesRDD.scala:325)
        at org.apache.spark.rdd.CarbonMergeFilesRDD.internalCompute(CarbonMergeFilesRDD.scala:287)
        at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:84)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:131)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
      Caused by: java.io.IOException: Table status update with mergeIndex file has failed
        at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.writeMergeIndexFileBasedOnSegmentFile(CarbonIndexFileMergeWriter.java:315)
        at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:112)
        ... 14 more
      Driver stacktrace:
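
      The chain above shows the merge running inside a Spark task (CarbonMergeFilesRDD) and aborting when the tablestatus update is reported as failed, the kind of transient failure that tends to surface only under CI load or concurrent segment operations. Purely as an illustrative mitigation, and not CarbonData's actual fix, a bounded retry around such a racy call can be sketched in Scala; mergeIndexFiles and segmentPath below are hypothetical stand-ins for the failing call:

          object RetryUtil {
            // Re-run `op` up to `maxAttempts` times, sleeping `delayMs` between
            // attempts; the last failure propagates when attempts are exhausted.
            def retry[T](maxAttempts: Int, delayMs: Long = 500L)(op: => T): T =
              try op
              catch {
                case _: Exception if maxAttempts > 1 =>
                  Thread.sleep(delayMs)
                  retry(maxAttempts - 1, delayMs)(op)
              }
          }

          // Hypothetical usage around the flaky status update:
          // RetryUtil.retry(maxAttempts = 3) { mergeIndexFiles(segmentPath) }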


      Error Message

      Job aborted due to stage failure: Task 0 in stage 7648.0 failed 1 times, most recent failure: Lost task 0.0 in stage 7648.0 (TID 107156) (ubuntu executor driver): java.lang.ArrayIndexOutOfBoundsException: 2

      Stacktrace

      org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 7648.0 failed 1 times, most recent failure: Lost task 0.0 in stage 7648.0 (TID 107156) (ubuntu executor driver): java.lang.ArrayIndexOutOfBoundsException: 2
        at org.apache.spark.sql.catalyst.expressions.GenericInternalRow.genericGet(rows.scala:201)
        at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow.getAs(rows.scala:35)
        at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow.isNullAt(rows.scala:36)
        at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow.isNullAt$(rows.scala:36)
        at org.apache.spark.sql.catalyst.expressions.GenericInternalRow.isNullAt(rows.scala:195)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.writeFields_0_1$(Unknown Source)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
        at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
        at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:346)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:131)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
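
      The ArrayIndexOutOfBoundsException: 2 originates in GenericInternalRow.genericGet: the generated unsafe projection probes field index 2 while the underlying row carries only two values, which is consistent with a schema-evolution race where rows produced before an ALTER TABLE ... ADD COLUMNS are read through the widened schema. A minimal sketch that reproduces the same exception outside CarbonData, runnable in a spark-shell (the row contents are illustrative):

          import org.apache.spark.sql.catalyst.expressions.GenericInternalRow
          import org.apache.spark.unsafe.types.UTF8String

          // A row materialised with the pre-ALTER arity: two fields.
          val row = new GenericInternalRow(Array[Any](1, UTF8String.fromString("a")))

          // A projection generated from a three-field (post-ALTER) schema
          // probes field 2, which this row does not have:
          row.isNullAt(2) // throws java.lang.ArrayIndexOutOfBoundsException: 2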



    People

      Assignee: Unassigned
      Reporter: Mahesh Raju Somalaraju (maheshrajus)
