CarbonData / CARBONDATA-1807

Carbon1.3.0-Pre-AggregateTable - Pre-aggregate creation not throwing error for wrong syntax and results in further query failures


Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 1.3.0
    • Fix Version/s: 1.3.0
    • Component/s: data-load
    • Environment: Test - 3 node ant cluster

    Description

      Steps:
      Beeline:
      1. Create the table and load it with data.
      2. Create a pre-aggregate table (datamap) with incorrect syntax.
      3. Run a select count on the aggregate table.
      4. Run an aggregate select query on the main table.

      Expected: Pre-aggregate table creation should have thrown a syntax error for the invalid USING class name.
      Actual: Pre-aggregate table creation is reported as successful, but the subsequent aggregate queries fail (a sketch of the presumably correct syntax follows below).
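
      For reference, a minimal sketch of what the DataMap DDL presumably should look like in 1.3.0, using the 'preaggregate' provider string rather than a handler class name (table and columns taken from the repro below; hedged, not verified against this build):

      create datamap agr_JL_r31 ON TABLE JL_r31
      USING 'preaggregate'
      as select user_num,user_imsi,sum(user_id),count(user_id)
      from JL_r31 group by user_num, user_imsi;

      The repro statement instead passes the misspelled class 'org.apache.carbondta.datamap.AggregateDataMapHandler' ('carbondta' instead of 'carbondata'), which should presumably be rejected at validation time rather than silently registering a DataMap.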

      Query:
      create table if not exists JL_r31
      (
      p_cap_time String,
      city String,
      product_code String,
      user_base_station String,
      user_belong_area_code String,
      user_num String,
      user_imsi String,
      user_id String,
      user_msisdn String,
      dim1 String,
      dim2 String,
      dim3 String,
      dim4 String,
      dim5 String,
      dim6 String,
      dim7 String,
      dim8 String,
      dim9 String,
      dim10 String,
      dim11 String,
      dim12 String,
      dim13 String,
      dim14 String,
      dim15 String,
      dim16 String,
      dim17 String,
      dim18 String,
      dim19 String,
      dim20 String,
      dim21 String,
      dim22 String,
      dim23 String,
      dim24 String,
      dim25 String,
      dim26 String,
      dim27 String,
      dim28 String,
      dim29 String,
      dim30 String,
      dim31 String,
      dim32 String,
      dim33 String,
      dim34 String,
      dim35 String,
      dim36 String,
      dim37 String,
      dim38 String,
      dim39 String,
      dim40 String,
      dim41 String,
      dim42 String,
      dim43 String,
      dim44 String,
      dim45 String,
      dim46 String,
      dim47 String,
      dim48 String,
      dim49 String,
      dim50 String,
      dim51 String,
      dim52 String,
      dim53 String,
      dim54 String,
      dim55 String,
      dim56 String,
      dim57 String,
      dim58 String,
      dim59 String,
      dim60 String,
      dim61 String,
      dim62 String,
      dim63 String,
      dim64 String,
      dim65 String,
      dim66 String,
      dim67 String,
      dim68 String,
      dim69 String,
      dim70 String,
      dim71 String,
      dim72 String,
      dim73 String,
      dim74 String,
      dim75 String,
      dim76 String,
      dim77 String,
      dim78 String,
      dim79 String,
      dim80 String,
      dim81 String,
      M1 double,
      M2 double,
      M3 double,
      M4 double,
      M5 double,
      M6 double,
      M7 double,
      M8 double,
      M9 double,
      M10 double )
      stored by 'org.apache.carbondata.format' TBLPROPERTIES('DICTIONARY_EXCLUDE'='user_num,user_imsi,user_ID,user_msisdn,user_base_station,user_belong_area_code','table_blocksize'='512');
      +---------+
      | Result  |
      +---------+
      +---------+
      No rows selected (0.55 seconds)

      LOAD DATA inpath 'hdfs://hacluster/user/test/jin_test2.csv' into table JL_r31 options('DELIMITER'=',', 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','IS_EMPTY_DATA_BAD_RECORD'='TRUE','FILEHEADER'='p_cap_time,city,product_code,user_base_station,user_belong_area_code,user_num,user_imsi,user_id,user_msisdn,dim1,dim2,dim3,dim4,dim5,dim6,dim7,dim8,dim9,dim10,dim11,dim12,dim13,dim14,dim15,dim16,dim17,dim18,dim19,dim20,dim21,dim22,dim23,dim24,dim25,dim26,dim27,dim28,dim29,dim30,dim31,dim32,dim33,dim34,dim35,dim36,dim37,dim38,dim39,dim40,dim41,dim42,dim43,dim44,dim45,dim46,dim47,dim48,dim49,dim50,dim51,dim52,dim53,dim54,dim55,dim56,dim57,dim58,dim59,dim60,dim61,dim62,dim63,dim64,dim65,dim66,dim67,dim68,dim69,dim70,dim71,dim72,dim73,dim74,dim75,dim76,dim77,dim78,dim79,dim80,dim81,M1,M2,M3,M4,M5,M6,M7,M8,M9,M10');
      +---------+
      | Result  |
      +---------+
      +---------+
      No rows selected (14.049 seconds)
      0: jdbc:hive2://10.18.98.136:23040> create datamap agr_JL_r31 ON TABLE JL_r31 USING 'org.apache.carbondta.datamap.AggregateDataMapHandler' as select user_num,user_imsi,sum(user_id),count(user_id) from JL_r31 group by user_num, user_imsi;
      +---------+
      | Result  |
      +---------+
      +---------+
      No rows selected (0.397 seconds)
      0: jdbc:hive2://10.18.98.136:23040> select count from JL_r31_agr_JL_r31;
      Error: org.apache.spark.sql.AnalysisException: Table or view not found: JL_r31_agr_JL_r31; line 1 pos 21 (state=,code=0)
      0: jdbc:hive2://10.18.98.136:23040> select user_num,user_imsi,sum(user_id),count(user_id) from JL_r31 group by user_num, user_imsi;
      Error: java.lang.ClassCastException: org.apache.carbondata.core.metadata.schema.table.DataMapSchema cannot be cast to org.apache.carbondata.core.metadata.schema.table.AggregationDataMapSchema (state=,code=0)
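
      Note: every aggregate query on the main table now fails with the same ClassCastException, because the stored DataMapSchema cannot be cast to AggregationDataMapSchema (see the stack trace in the driver logs below). A hedged cleanup sketch to unblock the main table, assuming the DROP DATAMAP DDL is available in this build:

      -- drop the mis-registered datamap so the parent table's query
      -- plan no longer tries to treat it as an aggregate schema
      drop datamap if exists agr_JL_r31 on table JL_r31;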

      Driver Logs:
      2017-11-24 21:45:10,997 | INFO | [pool-23-thread-4] | Parsing command: create datamap agr_JL_r31 ON TABLE JL_r31 USING 'org.apache.carbondta.datamap.AggregateDataMapHandler' as select user_num,user_imsi,sum(user_id),count(user_id) from JL_r31 group by user_num, user_imsi | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:45:11,046 | INFO | [pool-23-thread-4] | pool-23-thread-4 Skip CarbonOptimizer | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
      2017-11-24 21:45:11,051 | INFO | [pool-23-thread-4] | 5: get_table : db=default tbl=jl_r31 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
      2017-11-24 21:45:11,051 | INFO | [pool-23-thread-4] | ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=jl_r31 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
      2017-11-24 21:45:11,052 | INFO | [pool-23-thread-4] | 5: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:589)
      2017-11-24 21:45:11,055 | INFO | [pool-23-thread-4] | ObjectStore, initialize called | org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:289)
      2017-11-24 21:45:11,060 | INFO | [pool-23-thread-4] | Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing | org.datanucleus.util.Log4JLogger.info(Log4JLogger.java:77)
      2017-11-24 21:45:11,061 | INFO | [pool-23-thread-4] | Using direct SQL, underlying DB is MYSQL | org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:139)
      2017-11-24 21:45:11,062 | INFO | [pool-23-thread-4] | Initialized ObjectStore | org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:272)
      2017-11-24 21:45:11,084 | INFO | [pool-23-thread-4] | Parsing command: array<string> | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:45:11,094 | INFO | [pool-23-thread-4] | pool-23-thread-4 HDFS lock path:hdfs://hacluster/carbonstore/default/jl_r31/meta.lock | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
      2017-11-24 21:45:11,095 | INFO | [pool-23-thread-4] | pool-23-thread-4 Trying to acquire lock: org.apache.carbondata.core.locks.HdfsFileLock@7abafd48 | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
      2017-11-24 21:45:11,129 | INFO | [pool-23-thread-4] | pool-23-thread-4 Successfully acquired the lock org.apache.carbondata.core.locks.HdfsFileLock@7abafd48 | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
      2017-11-24 21:45:11,129 | INFO | [pool-23-thread-4] | pool-23-thread-4 HDFS lock path:hdfs://hacluster/carbonstore/default/jl_r31/droptable.lock | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
      2017-11-24 21:45:11,129 | INFO | [pool-23-thread-4] | pool-23-thread-4 Trying to acquire lock: org.apache.carbondata.core.locks.HdfsFileLock@650ff0aa | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
      2017-11-24 21:45:11,160 | INFO | [pool-23-thread-4] | pool-23-thread-4 Successfully acquired the lock org.apache.carbondata.core.locks.HdfsFileLock@650ff0aa | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
      2017-11-24 21:45:11,254 | INFO | [pool-23-thread-4] | Parsing command: `default`.`jl_r31` | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:45:11,269 | INFO | [pool-23-thread-4] | 5: get_table : db=default tbl=jl_r31 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
      2017-11-24 21:45:11,270 | INFO | [pool-23-thread-4] | ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=jl_r31 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
      2017-11-24 21:45:11,288 | INFO | [pool-23-thread-4] | Parsing command: array<string> | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:45:11,293 | INFO | [pool-23-thread-4] | 5: get_table : db=default tbl=jl_r31 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
      2017-11-24 21:45:11,294 | INFO | [pool-23-thread-4] | ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=jl_r31 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
      2017-11-24 21:45:11,311 | INFO | [pool-23-thread-4] | Parsing command: array<string> | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:45:11,319 | INFO | [pool-23-thread-4] | 5: get_database: default | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
      2017-11-24 21:45:11,320 | INFO | [pool-23-thread-4] | ugi=anonymous ip=unknown-ip-addr cmd=get_database: default | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
      2017-11-24 21:45:11,325 | INFO | [pool-23-thread-4] | 5: get_database: default | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
      2017-11-24 21:45:11,325 | INFO | [pool-23-thread-4] | ugi=anonymous ip=unknown-ip-addr cmd=get_database: default | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
      2017-11-24 21:45:11,330 | INFO | [pool-23-thread-4] | 5: get_tables: db=default pat=* | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
      2017-11-24 21:45:11,330 | INFO | [pool-23-thread-4] | ugi=anonymous ip=unknown-ip-addr cmd=get_tables: db=default pat=* | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
      2017-11-24 21:45:11,345 | INFO | [pool-23-thread-4] | pool-23-thread-4 Parent table updated is successful for table default.JL_r31 | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
      2017-11-24 21:45:11,360 | INFO | [pool-23-thread-4] | pool-23-thread-4 Deleted the lock file hdfs://hacluster/carbonstore/default/jl_r31/meta.lock | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
      2017-11-24 21:45:11,360 | INFO | [pool-23-thread-4] | pool-23-thread-4 Pre agg table lock released successfully | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
      2017-11-24 21:45:11,372 | INFO | [pool-23-thread-4] | pool-23-thread-4 Deleted the lock file hdfs://hacluster/carbonstore/default/jl_r31/droptable.lock | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
      2017-11-24 21:45:11,372 | INFO | [pool-23-thread-4] | pool-23-thread-4 Pre agg table lock released successfully | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
      2017-11-24 21:45:11,373 | AUDIT | [pool-23-thread-4] | [BLR1000014290][anonymous][Thread-171]DataMap agr_JL_r31 successfully added to Table JL_r31 | org.apache.carbondata.common.logging.impl.StandardLogService.audit(StandardLogService.java:207)
      2017-11-24 21:46:06,670 | INFO | [spark-dynamic-executor-allocation] | Request to remove executorIds: 5 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:06,670 | INFO | [spark-dynamic-executor-allocation] | Requesting to kill executor(s) 5 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:06,670 | INFO | [spark-dynamic-executor-allocation] | Actual list of executor(s) to be killed is 5 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:06,679 | INFO | [spark-dynamic-executor-allocation] | Removing executor 5 because it has been idle for 60 seconds (new desired total will be 1) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:07,682 | INFO | [spark-dynamic-executor-allocation] | Request to remove executorIds: 4 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:07,682 | INFO | [spark-dynamic-executor-allocation] | Requesting to kill executor(s) 4 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:07,683 | INFO | [spark-dynamic-executor-allocation] | Actual list of executor(s) to be killed is 4 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:07,691 | INFO | [spark-dynamic-executor-allocation] | Removing executor 4 because it has been idle for 60 seconds (new desired total will be 0) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:08,221 | INFO | [dispatcher-event-loop-1] | Disabling executor 5. | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:08,221 | INFO | [dag-scheduler-event-loop] | Executor lost: 5 (epoch 0) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:08,222 | INFO | [dispatcher-event-loop-5] | Trying to remove executor 5 from BlockManagerMaster. | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:08,222 | INFO | [dispatcher-event-loop-5] | Removing block manager BlockManagerId(5, BLR1000014291, 38929, None) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:08,222 | INFO | [dag-scheduler-event-loop] | Removed 5 successfully in removeExecutor | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:08,225 | INFO | [dispatcher-event-loop-0] | Executor 5 on BLR1000014291 killed by driver. | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:08,227 | INFO | [SparkListenerBus] | Existing executor 5 has been removed (new total is 1) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:08,670 | INFO | [dispatcher-event-loop-3] | Disabling executor 4. | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:08,671 | INFO | [dag-scheduler-event-loop] | Executor lost: 4 (epoch 0) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:08,671 | INFO | [dispatcher-event-loop-4] | Trying to remove executor 4 from BlockManagerMaster. | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:08,671 | INFO | [dispatcher-event-loop-4] | Removing block manager BlockManagerId(4, BLR1000014290, 55432, None) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:08,672 | INFO | [dag-scheduler-event-loop] | Removed 4 successfully in removeExecutor | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:08,675 | INFO | [dispatcher-event-loop-2] | Executor 4 on BLR1000014290 killed by driver. | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:46:08,677 | INFO | [SparkListenerBus] | Existing executor 4 has been removed (new total is 0) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:49:03,912 | INFO | [pool-23-thread-5] | Running query 'select count from JL_r31_agr_JL_r31' with 6d7abd93-6d45-4a6b-bcda-d31441763de5 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:49:03,913 | INFO | [pool-23-thread-5] | Parsing command: select count from JL_r31_agr_JL_r31 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:49:03,962 | INFO | [pool-23-thread-5] | 6: get_table : db=default tbl=jl_r31_agr_jl_r31 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
      2017-11-24 21:49:03,963 | INFO | [pool-23-thread-5] | ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=jl_r31_agr_jl_r31 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
      2017-11-24 21:49:03,963 | INFO | [pool-23-thread-5] | 6: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:589)
      2017-11-24 21:49:03,967 | INFO | [pool-23-thread-5] | ObjectStore, initialize called | org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:289)
      2017-11-24 21:49:03,972 | INFO | [pool-23-thread-5] | Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing | org.datanucleus.util.Log4JLogger.info(Log4JLogger.java:77)
      2017-11-24 21:49:03,973 | INFO | [pool-23-thread-5] | Using direct SQL, underlying DB is MYSQL | org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:139)
      2017-11-24 21:49:03,974 | INFO | [pool-23-thread-5] | Initialized ObjectStore | org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:272)
      2017-11-24 21:49:03,982 | INFO | [pool-23-thread-5] | 6: get_table : db=default tbl=jl_r31_agr_jl_r31 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
      2017-11-24 21:49:03,982 | INFO | [pool-23-thread-5] | ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=jl_r31_agr_jl_r31 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
      2017-11-24 21:49:03,987 | ERROR | [pool-23-thread-5] | Error executing query, currentState RUNNING, | org.apache.spark.internal.Logging$class.logError(Logging.scala:91)
      org.apache.spark.sql.AnalysisException: Table or view not found: JL_r31_agr_JL_r31; line 1 pos 21
      at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
      at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:459)
      at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:478)
      at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.apply

      2017-11-24 21:49:11,895 | INFO | [pool-23-thread-6] | Running query 'select user_num,user_imsi,sum(user_id),count(user_id) from JL_r31 group by user_num, user_imsi' with 95a0b546-381a-4300-9984-5ad53553036e | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:49:11,896 | INFO | [pool-23-thread-6] | Parsing command: select user_num,user_imsi,sum(user_id),count(user_id) from JL_r31 group by user_num, user_imsi | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:49:11,951 | INFO | [pool-23-thread-6] | 7: get_table : db=default tbl=jl_r31 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
      2017-11-24 21:49:11,952 | INFO | [pool-23-thread-6] | ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=jl_r31 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
      2017-11-24 21:49:11,953 | INFO | [pool-23-thread-6] | 7: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:589)
      2017-11-24 21:49:11,956 | INFO | [pool-23-thread-6] | ObjectStore, initialize called | org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:289)
      2017-11-24 21:49:11,962 | INFO | [pool-23-thread-6] | Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing | org.datanucleus.util.Log4JLogger.info(Log4JLogger.java:77)
      2017-11-24 21:49:11,964 | INFO | [pool-23-thread-6] | Using direct SQL, underlying DB is MYSQL | org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:139)
      2017-11-24 21:49:11,964 | INFO | [pool-23-thread-6] | Initialized ObjectStore | org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:272)
      2017-11-24 21:49:11,985 | INFO | [pool-23-thread-6] | Parsing command: array<string> | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
      2017-11-24 21:49:12,129 | ERROR | [pool-23-thread-6] | Error executing query, currentState RUNNING, | org.apache.spark.internal.Logging$class.logError(Logging.scala:91)
      java.lang.ClassCastException: org.apache.carbondata.core.metadata.schema.table.DataMapSchema cannot be cast to org.apache.carbondata.core.metadata.schema.table.AggregationDataMapSchema
      at org.apache.carbondata.core.preagg.AggregateTableSelector.selectPreAggDataMapSchema(AggregateTableSelector.java:70)
      at org.apache.spark.sql.hive.CarbonPreAggregateQueryRules.apply(CarbonPreAggregateRules.scala:185)
      at org.apache.spark.sql.hive.CarbonPreAggregateQueryRules.apply(CarbonPreAggregateRules.scala:67)
      at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
      at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
      at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
      at scala.collection.immutable.List.foldLeft(List.scala:84)
      at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
      at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
      at scala.collection.immutable.List.foreach(List.scala:381)
      at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
      at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:64)
      at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:62)
      at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)
      at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
      at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
      at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:220)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:163)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:160)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:422)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:173)
      at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

          People

            Assignee: kumarvishal09 (Kumar Vishal)
            Reporter: Ram@huawei (Ramakrishna S)
