Details
- Type: Bug
- Status: Patch Available
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 2.4.0
- Fix Version/s: None
- Component/s: None
Description
Per https://builds.apache.org/job/Hadoop-Hdfs-trunk/1774/testReport, we had the following failure. A local rerun was successful.
Regression: org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt
Failing for the past 1 build (Since Failed#1774). Took 50 sec.

Error Message: test timed out after 50000 milliseconds

Stacktrace:
java.lang.Exception: test timed out after 50000 milliseconds
	at java.lang.Object.wait(Native Method)
	at org.apache.hadoop.hdfs.DFSOutputStream.waitForAckedSeqno(DFSOutputStream.java:2024)
	at org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:2008)
	at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2107)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:98)
	at org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt(TestCrcCorruption.java:133)
See the relevant exceptions in the log:
2014-06-14 11:56:15,283 WARN datanode.DataNode (BlockReceiver.java:verifyChunks(404)) - Checksum error in block BP-1675558312-67.195.138.30-1402746971712:blk_1073741825_1001 from /127.0.0.1:41708
org.apache.hadoop.fs.ChecksumException: Checksum error: DFSClient_NONMAPREDUCE_-1139495951_8 at 64512 exp: 1379611785 got: -12163112
	at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:353)
	at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:284)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.verifyChunks(BlockReceiver.java:402)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:537)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:734)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:741)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:234)
	at java.lang.Thread.run(Thread.java:662)

2014-06-14 11:56:15,285 WARN datanode.DataNode (BlockReceiver.java:run(1207)) - IOException in BlockReceiver.run():
java.io.IOException: Shutting down writer and responder due to a checksum error in received data. The error response has been sent upstream.
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1352)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1278)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1199)
	at java.lang.Thread.run(Thread.java:662)
...
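To make the failure mode concrete: the client thread is parked in Object.wait() inside DFSOutputStream.waitForAckedSeqno(), and once the datanode's packet responder shuts down after the checksum error, no ack ever arrives to wake it, so only the JUnit test timeout ends the run. The following is a minimal standalone sketch of that ack-wait pattern, not Hadoop code; the class name, fields, and methods are all hypothetical, and it adds an explicit deadline to show the bounded-wait alternative to hanging indefinitely.

```java
// Hypothetical sketch (not the Hadoop implementation): a writer thread
// blocks in Object.wait() until a responder thread acks a sequence
// number. If the responder dies (as after the checksum error above),
// the ack never comes; a deadline on the wait turns the hang into a
// detectable timeout.
public class AckWaitSketch {
    private long lastAckedSeqno = -1;
    private final Object ackLock = new Object();

    // Called by the (hypothetical) responder thread when an ack arrives.
    void onAck(long seqno) {
        synchronized (ackLock) {
            lastAckedSeqno = seqno;
            ackLock.notifyAll();
        }
    }

    // Waits until the given seqno is acked or the deadline passes.
    // Returns true if acked, false on timeout. The reported hang is
    // the case where nothing bounds this wait except the test timeout.
    boolean waitForAckedSeqno(long seqno, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        synchronized (ackLock) {
            while (lastAckedSeqno < seqno) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    return false; // responder is gone; no ack will come
                }
                ackLock.wait(remaining);
            }
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        AckWaitSketch s = new AckWaitSketch();
        // Responder died before acking seqno 5: the wait times out
        // instead of blocking forever.
        System.out.println("acked=" + s.waitForAckedSeqno(5, 200));
        // A live responder acks in time.
        new Thread(() -> s.onAck(5)).start();
        System.out.println("acked=" + s.waitForAckedSeqno(5, 2000));
    }
}
```

The guarded wait-in-a-loop with notifyAll is the standard Java monitor idiom; the sketch only illustrates why a dead responder leaves the writer stuck until an external timeout fires.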
Attachments
Issue Links
- relates to HDFS-10549 Correctly revoke file leases when closing files (Resolved)