Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Not A Problem
- Affects Version/s: 1.6.0
- Fix Version/s: None
- Component/s: None
- Environment:
cloudera: Cloudera Express 5.10.0
java: HotSpot 1.8.0_77
spark: spark-core_2.10-1.6.0-cdh5.7.0.jar
hadoop: 2.6.0-cdh5.7.0 from c00978c67b0d3fe9f3b896b5030741bd40bf541
hdfs: 2.6.0-cdh5.7.0 from rc00978c67b0d3fe9f3b896b5030741bd40bf541a
yarn: 2.6.0-cdh5.7.0 from c00978c67b0d3fe9f3b896b5030741bd40bf541a
Description
The Spark application does not save all of its files into the output folder: for example, only 'part-r-00101.avro' through 'part-r-00127.avro' are present, although the full range 'part-r-00000.avro' through 'part-r-00127.avro' is expected. It looks like all files were written into _temporary/..., but by the time the results were to be moved into the output folder, the files had disappeared from _temporary. In the executor logs I saw that every task was committed with FileOutputCommitter. There were no task preemptions and no speculation.
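For context, FileOutputCommitter stages each task attempt's output under the job's _temporary directory and only moves it into the final output folder on commit; the paths in the logs below follow this layout. A minimal sketch of that layout for one of the attempts seen here (the commit steps in the comments describe the v1 committer):

import org.apache.hadoop.fs.Path

// FileOutputCommitter (v1) staging layout, matching the paths in the logs below.
val output   = new Path("/user/USER/DATA/dt=2017-07-02--20-03-14-415")
val jobTmp   = new Path(output, "_temporary/0")                                    // per job attempt
val taskTmp  = new Path(jobTmp, "_temporary/attempt_201707022303_0011_r_000082_0") // per task attempt
val partFile = new Path(taskTmp, "part-r-00082.avro")                              // written by the task
// commitTask renames taskTmp to <jobTmp>/<taskAttemptId>; commitJob then merges
// everything under jobTmp into `output` and deletes the whole _temporary tree.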
The data is saved to HDFS like this:
rdd
  .map(v => new AvroKey[V](v) -> null)
  .saveAsNewAPIHadoopFile(
    directory,
    classOf[AvroKey[V]],
    classOf[NullWritable],
    classOf[AvroKeyOutputFormat[V]],
    createJob().getConfiguration
  )
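For reference, a self-contained sketch of the same write path, with the imports it needs. The `records` RDD, the `directory` and `schema` parameters, and the inline Job setup (standing in for the reporter's `createJob()` helper) are assumptions for illustration:

import org.apache.avro.Schema
import org.apache.avro.generic.GenericRecord
import org.apache.avro.mapred.AvroKey
import org.apache.avro.mapreduce.{AvroJob, AvroKeyOutputFormat}
import org.apache.hadoop.io.NullWritable
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.rdd.RDD

// Hedged, self-contained variant of the snippet above; NullWritable.get()
// replaces the bare `null` value for clarity, the behavior is the same.
def save(records: RDD[GenericRecord], directory: String, schema: Schema): Unit = {
  val job = Job.getInstance()
  AvroJob.setOutputKeySchema(job, schema) // presumably done inside the reporter's createJob()
  records
    .map(v => new AvroKey[GenericRecord](v) -> NullWritable.get())
    .saveAsNewAPIHadoopFile(
      directory,
      classOf[AvroKey[GenericRecord]],
      classOf[NullWritable],
      classOf[AvroKeyOutputFormat[GenericRecord]],
      job.getConfiguration
    )
}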
For the files that do appear in the output folder, the logs contain exceptions like this one:
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/USER/DATA/dt=2017-07-02--20-03-14-415/_temporary/0/_temporary/attempt_201707022303_0011_r_000082_0/part-r-00082.avro (inode 35903648): File does not exist. Holder DFSClient_NONMAPREDUCE_-1729744390_72 does not have any open files.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3597)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3400)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3256)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:677)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:213)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:485)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
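This exception comes from the NameNode's lease check: addBlock only succeeds while the client still holds the lease on an existing file, and here the part file under _temporary has already been removed. A hedged reproduction sketch of that failure mode (not from the report; the path and sizes are illustrative, and the failure may surface at the next block allocation or at close):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object LeaseRepro {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    val p    = new Path("/tmp/lease-repro")

    val out = FileSystem.get(conf).create(p, true) // writer A opens the file, taking the lease
    out.write(new Array[Byte](1024))
    out.hflush()

    FileSystem.newInstance(conf).delete(p, false)  // another client deletes it out from under A

    out.write(new Array[Byte](128 << 20))          // crossing a block boundary forces addBlock,
    out.close()                                    // which fails the same way: "No lease on ...
  }                                                // File does not exist."
}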
For the files that never appear in the output folder, there is nothing at all in the HDFS or Spark logs.
The problem occurs in roughly one run out of five.
Issue Links
- relates to: SPARK-2984 FileNotFoundException on _temporary directory (Resolved)