CarbonData / CARBONDATA-3482

Null pointer exception when concurrent select queries are executed from different beeline terminals.


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.6.1
    • Component/s: None
    • Labels: None

      Description

      1. Beeline 1: create tables (1K tables)
      2. Beeline 2: insert into table t2 (1 record per insert) up to 7K rows
      3. Run the following queries concurrently:
         q1: select count(*) from t1
         q2: select * from t1 limit 1
         q3: select count(*) from t2
         q4: select * from t2 limit 1
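
The two-terminal repro can be sketched as a small JDBC harness that fires the four queries concurrently, each on its own connection (mirroring separate beeline sessions). This is an illustrative sketch only: the thrift-server URL and the table names t1/t2 are assumptions, and the HiveServer2 JDBC driver must be on the classpath for `run()` to actually connect.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrentSelectRepro {
    // Assumed Spark thrift-server endpoint; adjust to your deployment.
    static final String URL = "jdbc:hive2://localhost:10000/default";

    // The four concurrent queries from the repro steps above.
    static String[] queries() {
        return new String[] {
            "select count(*) from t1",
            "select * from t1 limit 1",
            "select count(*) from t2",
            "select * from t2 limit 1"
        };
    }

    // Each query runs on its own connection, mimicking a separate beeline terminal.
    static void run() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (String sql : queries()) {
            pool.submit(() -> {
                try (Connection conn = DriverManager.getConnection(URL);
                     Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(sql)) {
                    while (rs.next()) { /* drain the result set */ }
                } catch (Exception e) {
                    e.printStackTrace(); // the NPE surfaces here when the race hits
                }
                return null;
            });
        }
        pool.shutdown();
    }
}
```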

       

      Exception:

      java.lang.NullPointerException
      at org.apache.carbondata.core.indexstore.blockletindex.BlockDataMap.getFileFooterEntrySchema(BlockDataMap.java:1061)
      at org.apache.carbondata.core.indexstore.blockletindex.BlockDataMap.prune(BlockDataMap.java:727)
      at org.apache.carbondata.core.indexstore.blockletindex.BlockDataMap.prune(BlockDataMap.java:821)
      at org.apache.carbondata.core.indexstore.blockletindex.BlockletDataMapFactory.getAllBlocklets(BlockletDataMapFactory.java:446)
      at org.apache.carbondata.core.datamap.TableDataMap.pruneWithoutFilter(TableDataMap.java:156)
      at org.apache.carbondata.core.datamap.TableDataMap.prune(TableDataMap.java:143)
      at org.apache.carbondata.hadoop.api.CarbonInputFormat.getPrunedBlocklets(CarbonInputFormat.java:563)
      at org.apache.carbondata.hadoop.api.CarbonInputFormat.getDataBlocksOfSegment(CarbonInputFormat.java:471)
      at org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplits(CarbonTableInputFormat.java:471)
      at org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplits(CarbonTableInputFormat.java:199)
      at org.apache.carbondata.spark.rdd.CarbonScanRDD.internalGetPartitions(CarbonScanRDD.scala:141)
      at org.apache.carbondata.spark.rdd.CarbonRDD.getPartitions(CarbonRDD.scala:66)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:256)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:254)
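
The trace shows `getFileFooterEntrySchema` dereferencing state that another concurrent query has evidently cleared. The following is a minimal sketch of that class of bug (shared cacheable index state read without a null guard while a concurrent session invalidates it); it is not CarbonData's actual code, and the class and method names are invented for illustration. A latch forces the bad interleaving deterministically:

```java
import java.util.concurrent.CountDownLatch;

public class UnsafeSchemaCache {
    // Shared, evictable state (stand-in for cached file-footer schema).
    private volatile String[] segmentSchema = {"col1", "col2"};

    // Simulates cache eviction/refresh triggered by a concurrent query.
    void invalidate() { segmentSchema = null; }

    // Reads without a null guard: throws NPE once invalidated.
    int schemaLength() { return segmentSchema.length; }

    // Forces the race deterministically: the reader only dereferences
    // after the invalidation has happened.
    public static boolean demonstrateNpe() {
        UnsafeSchemaCache cache = new UnsafeSchemaCache();
        CountDownLatch invalidated = new CountDownLatch(1);
        boolean[] sawNpe = {false};
        Thread reader = new Thread(() -> {
            try {
                invalidated.await();   // wait for the concurrent invalidation
                cache.schemaLength();  // dereferences the now-null schema
            } catch (NullPointerException e) {
                sawNpe[0] = true;
            } catch (InterruptedException ignored) { }
        });
        reader.start();
        cache.invalidate();            // the "other beeline session" clears the cache
        invalidated.countDown();
        try { reader.join(); } catch (InterruptedException ignored) { }
        return sawNpe[0];
    }
}
```

A guard (re-loading the schema when the cached reference is null, or synchronizing reads with eviction) would be the general shape of a fix for this pattern; the actual resolution shipped in 1.6.1.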


            People

            • Assignee: Kunal Kapoor
            • Reporter: Kunal Kapoor
            • Votes: 0
            • Watchers: 1


                Time Tracking

                • Original Estimate: Not Specified
                • Remaining Estimate: 0h
                • Time Spent: 5.5h