Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.5.0, 1.4.1
    • Component/s: None
    • Labels: None

    Description

      In the test output, error logs like the following sometimes appear:
      ```
      org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 805.0 failed 1 times, most recent failure: Lost task 3.0 in stage 805.0 (TID 5751, localhost, executor driver): java.lang.NegativeArraySizeException
      at java.util.AbstractCollection.toArray(AbstractCollection.java:136)
      at java.util.ArrayList.<init>(ArrayList.java:177)
      at org.apache.carbondata.datamap.bloom.BloomCoarseGrainDataMapFactory.clear(BloomCoarseGrainDataMapFactory.java:340)
      at org.apache.carbondata.core.datamap.TableDataMap.clear(TableDataMap.java:206)
      at org.apache.carbondata.core.datamap.DataMapStoreManager.clearDataMaps(DataMapStoreManager.java:430)
      at org.apache.carbondata.core.datamap.DistributableDataMapFormat$1.initialize(DistributableDataMapFormat.java:125)
      at org.apache.carbondata.spark.rdd.DataMapPruneRDD.internalCompute(SparkDataMapJob.scala:73)
      ```

      This error is caused by concurrently clearing the datamaps; for the same problem in BlockletDataMapFactory, refer to issue 2496 (PR 2324).
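The failing frame is `new ArrayList<>(collection)`, which calls `toArray()` on a collection that another thread is clearing at the same time; the size the copy observes can become inconsistent mid-copy. A minimal sketch of the usual guard, serializing mutation and copy under one monitor (all class and method names below are illustrative, not CarbonData's actual code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: guard a segment-to-datamap cache so that one thread
// cannot copy a value list while another thread is mutating or removing it.
public class GuardedDataMapCache {
    private final Map<String, List<String>> segmentMap = new HashMap<>();

    public void add(String segmentId, String dataMapPath) {
        synchronized (segmentMap) {
            segmentMap.computeIfAbsent(segmentId, k -> new ArrayList<>())
                      .add(dataMapPath);
        }
    }

    public List<String> clear(String segmentId) {
        synchronized (segmentMap) {
            List<String> entries = segmentMap.remove(segmentId);
            // Copying inside the lock is safe: no other thread can mutate
            // 'entries' while we hold the monitor, so toArray() sees a
            // consistent size.
            return entries == null ? new ArrayList<>() : new ArrayList<>(entries);
        }
    }

    public static void main(String[] args) {
        GuardedDataMapCache cache = new GuardedDataMapCache();
        cache.add("seg0", "bloom_0");
        cache.add("seg0", "bloom_1");
        System.out.println(cache.clear("seg0").size()); // prints 2
        System.out.println(cache.clear("seg0").size()); // prints 0
    }
}
```

An alternative with the same effect is to hold the entries in a `java.util.concurrent` collection whose `toArray()` is safe under concurrent modification, but a plain monitor keeps the clear-then-copy sequence atomic as a unit.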


            People

              Assignee: Chuanyin Xu (xuchuanyin)
              Reporter: Chuanyin Xu (xuchuanyin)
              Votes: 0
              Watchers: 0

              Dates

                Created:
                Updated:
                Resolved:

                Time Tracking

                  Estimated: Not Specified
                  Remaining: 0h
                  Logged: 2.5h