Details
Type: Bug
Status: Resolved
Priority: Minor
Resolution: Fixed
Fix Version/s: 1.3.0
Labels: None
Description
An exception is thrown when performing a select query on a newly added Boolean column.
Steps to Reproduce:
1) Create the table:
CREATE TABLE uniqdata_carbon1 (CUST_ID int, CUST_NAME string, ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint, BIGINT_COLUMN2 bigint, DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10), Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 int) STORED BY 'carbondata' TBLPROPERTIES('no_inverted_index'='cust_id')
2) Load data into the table:
LOAD DATA INPATH 'hdfs://localhost:54310/Data/uniqdata/2000_UniqData.csv' INTO TABLE uniqdata_carbon1 OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE')
3) Execute the alter query:
ALTER TABLE uniqdata_carbon1 ADD COLUMNS (booleanfield boolean) TBLPROPERTIES('default.value.booleanfield'='true');
4) Describe the table:
DESC uniqdata_carbon1
Output:
+---------------------+----------------+---------+
| col_name            | data_type      | comment |
+---------------------+----------------+---------+
| cust_id             | int            | NULL    |
| cust_name           | string         | NULL    |
| active_emui_version | string         | NULL    |
| dob                 | timestamp      | NULL    |
| doj                 | timestamp      | NULL    |
| bigint_column1      | bigint         | NULL    |
| bigint_column2      | bigint         | NULL    |
| decimal_column1     | decimal(30,10) | NULL    |
| decimal_column2     | decimal(36,10) | NULL    |
| double_column1      | double         | NULL    |
| double_column2      | double         | NULL    |
| integer_column1     | int            | NULL    |
| booleanfield        | boolean        | NULL    |
+---------------------+----------------+---------+
5) Execute a select query on the added column:
SELECT booleanfield FROM uniqdata_carbon1
Actual Output:
Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 71.0 failed 1 times, most recent failure: Lost task 0.0 in stage 71.0 (TID 322, localhost, executor driver): java.lang.NumberFormatException: For input string: "true"
at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2043)
at sun.misc.FloatingDecimal.parseDouble(FloatingDecimal.java:110)
at java.lang.Double.parseDouble(Double.java:538)
at java.lang.Double.valueOf(Double.java:502)
at org.apache.carbondata.core.scan.executor.util.RestructureUtil.getMeasureDefaultValue(RestructureUtil.java:306)
at org.apache.carbondata.core.scan.executor.util.RestructureUtil.createMeasureInfoAndGetCurrentBlockQueryMeasures(RestructureUtil.java:395)
at org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.getCurrentBlockQueryMeasures(AbstractQueryExecutor.java:504)
at org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.getBlockExecutionInfoForBlock(AbstractQueryExecutor.java:308)
at org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.getBlockExecutionInfos(AbstractQueryExecutor.java:258)
at org.apache.carbondata.core.scan.executor.impl.VectorDetailQueryExecutor.execute(VectorDetailQueryExecutor.java:36)
at org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.initialize(VectorizedCarbonRecordReader.java:123)
at org.apache.carbondata.spark.rdd.CarbonScanRDD.internalCompute(CarbonScanRDD.scala:365)
at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:60)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace: (state=,code=0)
Expected Output: The select query should succeed and return the configured default value (true) for booleanfield in the existing rows, instead of throwing an exception.
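Root cause analysis: per the stack trace, the failure occurs in RestructureUtil.getMeasureDefaultValue, which hands the stored default value "true" to Double.valueOf, i.e. the newly added boolean column's default is decoded along the numeric-measure path. Below is a minimal, self-contained Java sketch of that failure mode; the DataType enum and the method body are illustrative stand-ins, not CarbonData's actual code, and the BOOLEAN branch only indicates one plausible shape of a fix.

import java.nio.charset.StandardCharsets;

public class BooleanDefaultValueSketch {

    // Hypothetical stand-in for the column data types involved.
    enum DataType { BOOLEAN, INT, DOUBLE }

    // Illustrative stand-in for RestructureUtil.getMeasureDefaultValue:
    // decodes the stored default-value bytes for a newly added column.
    static Object getMeasureDefaultValue(DataType type, byte[] defaultValue) {
        String value = new String(defaultValue, StandardCharsets.UTF_8).trim();
        switch (type) {
            case INT:
                return Long.parseLong(value);
            case BOOLEAN:
                // Assumed fix: parse boolean defaults as booleans so they
                // never reach the numeric fall-through below.
                return Boolean.parseBoolean(value);
            default:
                // Without a BOOLEAN branch, "true" falls through to here and
                // Double.valueOf("true") throws
                // java.lang.NumberFormatException: For input string: "true"
                return Double.valueOf(value);
        }
    }

    public static void main(String[] args) {
        byte[] stored = "true".getBytes(StandardCharsets.UTF_8);
        // With the BOOLEAN branch the default resolves cleanly:
        System.out.println(getMeasureDefaultValue(DataType.BOOLEAN, stored));
        // Simulate the buggy path by decoding the same bytes as a double:
        try {
            getMeasureDefaultValue(DataType.DOUBLE, stored);
        } catch (NumberFormatException e) {
            System.out.println("NumberFormatException: " + e.getMessage());
        }
    }
}

Running the sketch prints true for the BOOLEAN path and reproduces the NumberFormatException when the value falls through to Double.valueOf, matching the trace above.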