Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 1.2.1
- Fix Version/s: None
- Component/s: None
- Environment: Hadoop 2.6.2 + Hive 1.2.1
Description
After initiating compaction of a Hive ACID ORC table, the CompactorMR job throws the following exception:
Error: java.lang.ArrayIndexOutOfBoundsException: 7
at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.<init>(TreeReaderFactory.java:1968)
at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2368)
at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.<init>(TreeReaderFactory.java:1969)
at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2368)
at org.apache.hadoop.hive.ql.io.orc.RecordReaderFactory.createTreeReader(RecordReaderFactory.java:69)
at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:202)
at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:539)
at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPair.<init>(OrcRawRecordMerger.java:183)
at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:466)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRawReader(OrcInputFormat.java:1308)
at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:512)
at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:491)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
As a result, the Hadoop job fails, and we see the following log output:
297 failed with state FAILED due to: Task failed task_1458819387386_11297_m_000008
Job failed as tasks failed. failedMaps:1 failedReduces:0
2016-04-06 11:30:57,891 INFO [dn209006-27]: mapreduce.Job (Job.java:monitorAndPrintJob(1392)) - Counters: 14
Job Counters
Failed map tasks=16
Killed map tasks=7
Launched map tasks=23
Other local map tasks=13
Data-local map tasks=6
Rack-local map tasks=4
Total time spent by all maps in occupied slots (ms)=412592
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=206296
Total vcore-seconds taken by all map tasks=206296
Total megabyte-seconds taken by all map tasks=422494208
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
2016-04-06 11:30:57,891 ERROR [dn209006-27]: compactor.Worker (Worker.java:run(176)) - Caught exception while trying to compact lqz.my_orc_acid_table. Marking clean to avoid repeated failures, java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:186)
at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:162)
2016-04-06 11:30:57,894 ERROR [dn209006-27]: txn.CompactionTxnHandler (CompactionTxnHandler.java:markCleaned(327)) - Expected to remove at least one row from completed_txn_components when marking compaction entry as clean!
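For reference, the workflow that leads to the failure looks roughly like the sketch below. The table name lqz.my_orc_acid_table is taken from the log above; the schema, the sample data, the ADD COLUMNS step, and the compaction type are illustrative assumptions (the related HIVE-14004 points at compacting files written under an older, narrower schema), not details given in this report.

-- Minimal reproduction sketch (schema, data, and ADD COLUMNS step are assumptions).
-- Assumed prerequisites: hive.support.concurrency=true,
-- hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager,
-- hive.compactor.initiator.on=true, hive.compactor.worker.threads > 0.
CREATE DATABASE IF NOT EXISTS lqz;

CREATE TABLE lqz.my_orc_acid_table (id INT, name STRING)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

INSERT INTO TABLE lqz.my_orc_acid_table VALUES (1, 'a'), (2, 'b');

-- Hypothetical step, based on the linked HIVE-14004: widen the schema so that
-- earlier base/delta files have fewer columns than the current table definition.
ALTER TABLE lqz.my_orc_acid_table ADD COLUMNS (extra STRING);

UPDATE lqz.my_orc_acid_table SET name = 'c' WHERE id = 1;

-- Trigger compaction; the CompactorMR map tasks then fail with the
-- ArrayIndexOutOfBoundsException shown at the top of this description.
ALTER TABLE lqz.my_orc_acid_table COMPACT 'major';
SHOW COMPACTIONS;

The sketch only shows the shape of the workflow; the exact data layout and column count that make the reader index column 7 out of bounds are not stated in this report.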
Attachments
Issue Links
- relates to: HIVE-14004 Minor compaction produces ArrayIndexOutOfBoundsException: 7 in SchemaEvolution.getFileType (Closed)