Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 4.0.0
Description
If the first attempt of a compaction task is pre-empted by YARN, or fails because of environmental issues, the re-attempted task will fail with a FileAlreadyExistsException:
{noformat}
Error: org.apache.hadoop.fs.FileAlreadyExistsException: /warehouse/tablespace/managed/hive/test.db/acid_table/dept=cse/_tmp_xxx/delete_delta_0000001_0000010/bucket_00000
	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.startFile(FSDirWriteFileOp.java:380)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2453)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2351)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:774)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:462)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:278)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1211)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1190)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1128)
	at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:531)
	at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:528)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:542)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:469)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098)
	at org.apache.orc.impl.PhysicalFsWriter.<init>(PhysicalFsWriter.java:95)
	at org.apache.orc.impl.WriterImpl.<init>(WriterImpl.java:177)
	at org.apache.hadoop.hive.ql.io.orc.WriterImpl.<init>(WriterImpl.java:94)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createWriter(OrcFile.java:378)
	at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat.getRawRecordWriter(OrcOutputFormat.java:299)
	at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.getDeleteEventWriter(CompactorMR.java:1084)
	at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:995)
	at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:958)
{noformat}
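The re-attempt fails because the first attempt's partial output is left behind under the _tmp_ directory: the compactor's output committer does not clean it up on abort (see the linked HIVE-18044), so the second attempt's ORC writer tries to create a bucket file that already exists. Below is a minimal sketch of the kind of cleanup an abortTask() implementation could perform; the committer class name and the "compactor.tmp.location" conf key are hypothetical stand-ins for however the compactor tracks its temporary output, not Hive's actual code.

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobContext;
import org.apache.hadoop.mapred.OutputCommitter;
import org.apache.hadoop.mapred.TaskAttemptContext;

/**
 * Illustrative committer: on task abort, delete any partial output left
 * under the compactor's temporary location so a re-attempt can recreate
 * the bucket files from scratch.
 */
public class CleanupOnAbortCommitter extends OutputCommitter {

  // Hypothetical conf key standing in for wherever the compactor
  // records its _tmp_xxx working directory.
  static final String TMP_LOCATION = "compactor.tmp.location";

  @Override
  public void abortTask(TaskAttemptContext context) throws IOException {
    String tmpLocation = context.getJobConf().get(TMP_LOCATION);
    if (tmpLocation == null) {
      return; // nothing was recorded, nothing to clean up
    }
    Path tmpPath = new Path(tmpLocation);
    FileSystem fs = tmpPath.getFileSystem(context.getJobConf());
    // Recursive delete; delete() returning false for a missing path
    // is fine, which keeps the abort idempotent across retries.
    fs.delete(tmpPath, true);
  }

  // Remaining callbacks are no-ops for this sketch.
  @Override public void setupJob(JobContext jobContext) { }
  @Override public void setupTask(TaskAttemptContext taskContext) { }
  @Override public boolean needsTaskCommit(TaskAttemptContext taskContext) { return false; }
  @Override public void commitTask(TaskAttemptContext taskContext) { }
}
{code}

An alternative defensive fix would be for the re-attempted task itself to delete (or create with overwrite) the leftover bucket file before opening its writer.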
Issue Links
- is cloned by: HIVE-18044 CompactorMR.CompactorOutputCommitter.abortTask() not implemented (Resolved)
- relates to: HIVE-23058 Compaction task reattempt fails with FileAlreadyExistsException (Closed)