Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 3.2.0
- Fix Version/s: None
- Component/s: None
Description
Steps to reproduce:
1. Create an external partitioned table.
2. Manually remove some partition directories using the hdfs dfs -rm command.
3. Run "MSCK REPAIR ... DROP PARTITIONS"; it fails with the following exception:
2020-07-06 10:42:11,434 WARN org.apache.hadoop.hive.metastore.utils.RetryUtilities$ExponentiallyDecayingBatchWork: [HiveServer2-Background-Pool: Thread-210]: Exception thrown while processing using a batch size 2
org.apache.hadoop.hive.metastore.utils.MetastoreException: MetaException(message:Index: 117, Size: 0)
    at org.apache.hadoop.hive.metastore.Msck$2.execute(Msck.java:479) ~[hive-exec-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hadoop.hive.metastore.Msck$2.execute(Msck.java:432) ~[hive-exec-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hadoop.hive.metastore.utils.RetryUtilities$ExponentiallyDecayingBatchWork.run(RetryUtilities.java:91) [hive-exec-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hadoop.hive.metastore.Msck.dropPartitionsInBatches(Msck.java:496) [hive-exec-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hadoop.hive.metastore.Msck.repair(Msck.java:223) [hive-exec-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hadoop.hive.ql.ddl.misc.msck.MsckOperation.execute(MsckOperation.java:74) [hive-exec-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hadoop.hive.ql.ddl.DDLTask.execute(DDLTask.java:80) [hive-exec-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213) [hive-exec-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105) [hive-exec-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:359) [hive-exec-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:330) [hive-exec-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:246) [hive-exec-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hadoop.hive.ql.Executor.execute(Executor.java:109) [hive-exec-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:721) [hive-exec-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:488) [hive-exec-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:482) [hive-exec-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:166) [hive-exec-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:225) [hive-service-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87) [hive-service-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:322) [hive-service-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at java.security.AccessController.doPrivileged(Native Method) [?:1.8.0_242]
    at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_242]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876) [hadoop-common-3.1.1.7.1.1.0-565.jar:?]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:340) [hive-service-3.1.3000.7.1.1.0-565.jar:3.1.3000.7.1.1.0-565]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_242]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_242]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_242]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_242]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_242]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_242]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_242]
Caused by: java.lang.IndexOutOfBoundsException: Index: 117, Size: 0
    at java.util.ArrayList.rangeCheck(ArrayList.java:657)
    at java.util.ArrayList.get(ArrayList.java:433)
    at com.esotericsoftware.kryo.util.MapReferenceResolver.getReadObject(MapReferenceResolver.java:60)
    at com.esotericsoftware.kryo.Kryo.readReferenceOrNull(Kryo.java:834)
    at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:684)
The failure appears to occur because the partition expression is serialized as a String, while deserialization expects an ExprNodeGenericFuncDesc. Another possible cause is a serialization difference on the request side: the partition expression should be serialized with Kryo in the drop request, which is not the case here.
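The suspected mismatch can be illustrated with a minimal, self-contained sketch. This uses plain Java serialization rather than Hive's Kryo utilities, and a hypothetical PartitionExpr class standing in for ExprNodeGenericFuncDesc; it is an analogy for the failure mode (writer emits a plain String, reader expects a richer type), not Hive's actual code.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationMismatch {

    // Hypothetical stand-in for ExprNodeGenericFuncDesc.
    static class PartitionExpr implements Serializable {
        String column;
        String value;
    }

    // Returns true when deserialization fails because the payload type
    // does not match what the reader expects.
    static boolean demonstrateMismatch() throws Exception {
        // Writer side: the partition expression is written as a plain String.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject("part_col=117");
        }

        // Reader side: expects a PartitionExpr object, receives a String.
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()))) {
            PartitionExpr expr = (PartitionExpr) in.readObject(); // ClassCastException
            return false;
        } catch (ClassCastException e) {
            return true;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("type mismatch on deserialization: " + demonstrateMismatch());
    }
}
```

In the Hive case the reader side is Kryo rather than ObjectInputStream, so the mismatch surfaces as the IndexOutOfBoundsException from Kryo's reference resolver seen in the trace above, instead of a plain ClassCastException.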
Attachments

Issue Links
- is fixed by: HIVE-22957 Support Partition Filtering In MSCK REPAIR TABLE Command (Closed)