Details
- Type: Bug
- Status: Resolved
- Priority: Trivial
- Resolution: Fixed
- Fix Version/s: 1.2.0
- Component/s: None
- Environment: spark2.1, hadoop2.7
Description
Steps to reproduce:
0: jdbc:hive2://localhost:10000> CREATE TABLE uniqData_t17(ID Int, date Timestamp, country String, name String, phonetype String, serialname String, salary Int) STORED BY 'CARBONDATA' TBLPROPERTIES('bucketnumber'='0', 'bucketcolumns'='name', 'DICTIONARY_INCLUDE'='NAME');
+---------+
| Result  |
+---------+
+---------+
No rows selected (0.501 seconds)
0: jdbc:hive2://localhost:10000> load data inpath 'hdfs://localhost:54310/dataDiff1.csv' into table uniqData_t17 OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='ID,date,country,name,phonetype,serialname,salary');
Error: java.lang.Exception: DataLoad failure (state=,code=0)
Logs:
17/08/31 12:17:07 WARN CarbonDataProcessorUtil: [Executor task launch worker-9][partitionID:default_uniqdata_t17_578e819e-bec8-49e5-a292-890db623e116] sort scope is set to LOCAL_SORT
17/08/31 12:17:07 ERROR DataLoadExecutor: [Executor task launch worker-9][partitionID:default_uniqdata_t17_578e819e-bec8-49e5-a292-890db623e116] Data Loading failed for table uniqdata_t17
java.lang.ArithmeticException: / by zero
at org.apache.carbondata.processing.newflow.sort.impl.ParallelReadMergeSorterWithBucketingImpl.initialize(ParallelReadMergeSorterWithBucketingImpl.java:78)
It should give a meaningful exception, such as "number of buckets cannot be zero", instead of a raw ArithmeticException.
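The "/ by zero" suggests the sorter divides by the configured bucket count without validating it first. A minimal sketch of the kind of early validation this report asks for, using hypothetical names (`BucketingInfo`, `recordsPerBucket`) that are illustrative, not CarbonData's actual API:

```java
// Sketch only: BucketingInfo and recordsPerBucket are hypothetical stand-ins
// for the bucketing metadata and the per-bucket arithmetic in the sorter.
public class BucketingValidationSketch {

  static final class BucketingInfo {
    private final int numberOfBuckets;
    BucketingInfo(int numberOfBuckets) { this.numberOfBuckets = numberOfBuckets; }
    int getNumberOfBuckets() { return numberOfBuckets; }
  }

  // Validate before any division so a misconfigured 'bucketnumber'='0'
  // surfaces as a descriptive error rather than an ArithmeticException.
  static int recordsPerBucket(int totalRecords, BucketingInfo info) {
    int buckets = info.getNumberOfBuckets();
    if (buckets <= 0) {
      throw new IllegalArgumentException(
          "Number of buckets cannot be zero or negative: " + buckets);
    }
    return totalRecords / buckets;
  }

  public static void main(String[] args) {
    System.out.println(recordsPerBucket(100, new BucketingInfo(4)));
    try {
      recordsPerBucket(100, new BucketingInfo(0));
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```

Ideally the same check would also run at CREATE TABLE time, rejecting `'bucketnumber'='0'` before any data load is attempted.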