Details
Type: Bug
Status: Open
Priority: Minor
Resolution: Unresolved
Affects Version/s: 1.6.0
Fix Version/s: None
Component/s: None
Environment: Spark 2.1
Description
Test steps:
In Spark 2.1 beeline, the user creates a carbon table and loads data:
create table Test_Boundary (c1_int int,c2_Bigint Bigint,c3_Decimal Decimal(38,38),c4_double double,c5_string string,c6_Timestamp Timestamp,c7_Datatype_Desc string) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('inverted_index'='c1_int,c2_Bigint,c5_string,c6_Timestamp','sort_columns'='c1_int,c2_Bigint,c5_string,c6_Timestamp');
LOAD DATA INPATH 'hdfs://hacluster/chetan/Test_Data1.csv' INTO table Test_Boundary OPTIONS('DELIMITER'=',','QUOTECHAR'='','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='');
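As a sanity check before moving to Hive (an assumed diagnostic step, not part of the original report), the load can be verified from the same Spark 2.1 beeline session with simple queries against the carbon table:
select count(*) from Test_Boundary;
select c3_Decimal from Test_Boundary limit 5;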
From Hive beeline, the user creates a Hive table on top of the already created carbon table using CarbonHiveSerDe:
CREATE TABLE IF NOT EXISTS Test_Boundary1 (c1_int int,c2_Bigint Bigint,c3_Decimal Decimal(38,38),c4_double double,c5_string string,c6_Timestamp Timestamp,c7_Datatype_Desc string) ROW FORMAT SERDE 'org.apache.carbondata.hive.CarbonHiveSerDe' WITH SERDEPROPERTIES ('mapreduce.input.carboninputformat.databaseName'='default','mapreduce.input.carboninputformat.tableName'='Test_Boundary') STORED AS INPUTFORMAT 'org.apache.carbondata.hive.MapredCarbonInputFormat' OUTPUTFORMAT 'org.apache.carbondata.hive.MapredCarbonOutputFormat' LOCATION 'hdfs://hacluster//user/hive/warehouse/carbon.store/default/test_boundary';
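To confirm that the SerDe maps the decimal column as expected before running the aggregation queries, a schema check from Hive beeline can be run (an assumed diagnostic, not part of the reported scenario):
DESCRIBE FORMATTED Test_Boundary1;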
The user executes the below select aggregation queries on the Hive table:
select min(c3_Decimal),max(c3_Decimal),sum(c3_Decimal),avg(c3_Decimal), count(c3_Decimal), variance(c3_Decimal) from test_boundary1 where exp(c1_int)=0.0 or exp(c1_int)=1.0;
select min(c3_Decimal),max(c3_Decimal),sum(c3_Decimal),avg(c3_Decimal), count(c3_Decimal), variance(c3_Decimal) from test_boundary1 where log(c1_int,1)=0.0 or log(c1_int,1) IS NULL;
select min(c3_Decimal),max(c3_Decimal),sum(c3_Decimal),avg(c3_Decimal), count(c3_Decimal), variance(c3_Decimal) from test_boundary1 where pmod(c1_int,1)=0 or pmod(c1_int,1) IS NULL;
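To narrow down whether the failure comes from reading the decimal column itself or from the filter/aggregation path, simplified queries such as the ones below (assumed diagnostics, not part of the reported steps) can be run against the same Hive table:
select c3_Decimal from test_boundary1 limit 10;
select count(c3_Decimal) from test_boundary1;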
Actual Result: The select aggregation queries with filters fail on the Hive table with the decimal type column using CarbonHiveSerDe in Spark 2.1.
Expected Result: The select aggregation queries with filters should succeed on the Hive table with the decimal type column using CarbonHiveSerDe in Spark 2.1.