Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.24.0, 0.23.1
    • Fix Version/s: 2.0.0-alpha
    • Component/s: datanode
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      STEPS:
      1. Deploy a single-node HDFS 0.23.1 cluster and configure HDFS as follows:
      A) enable webhdfs
      B) enable append
      C) disable permissions
      2. Start HDFS.
      3. Run the attached test script.

      RESULT:
      Expected: a file named testFile is created and populated with 32K * 5000 zeros, and HDFS stays healthy.
      Actual: the script cannot finish; the file is created but is not populated as expected because the append operation fails.

      The datanode log shows that the block scanner reports a bad replica and the namenode decides to delete it. Since it is a single-node cluster, the append then fails. The script fails every time, which makes no sense.

      Datanode and Namenode logs are attached.
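
      For reference, a minimal Java sketch of the reproduction described above (hypothetical; the attached test.sh and the testAppend.patch unit tests are the actual reproducers). It assumes fs.defaultFS (or a webhdfs:// URI) points at the test cluster and that append support is enabled, per the configuration steps:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class AppendRepro {
          public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              FileSystem fs = FileSystem.get(conf);     // or the cluster's webhdfs:// URI

              Path file = new Path("/testFile");        // file name taken from the description
              byte[] zeros = new byte[32 * 1024];       // 32K of zeros per append

              fs.create(file).close();                  // start from an empty file
              for (int i = 0; i < 5000; i++) {
                  FSDataOutputStream out = fs.append(file);
                  out.write(zeros);
                  out.close();
              }

              long expected = 32L * 1024 * 5000;
              System.out.println("length=" + fs.getFileStatus(file).getLen()
                  + " expected=" + expected);
              fs.close();
          }
      }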

      1. hadoop-wangzw-datanode-ubuntu.log
        782 kB
        Zhanwei Wang
      2. hadoop-wangzw-namenode-ubuntu.log
        797 kB
        Zhanwei Wang
      3. HDFS-3100.patch
        6 kB
        Brandon Li
      4. HDFS-3100.patch
        6 kB
        Brandon Li
      5. HDFS-3100.patch
        7 kB
        Brandon Li
      6. HDFS-3100.patch
        7 kB
        Brandon Li
      7. HDFS-3100.patch
        6 kB
        Brandon Li
      8. test.sh
        0.4 kB
        Zhanwei Wang
      9. testAppend.patch
        3 kB
        Tsz Wo Nicholas Sze

        Issue Links

          Activity

          Zhanwei Wang added a comment -

          test script and logs

          Tsz Wo Nicholas Sze added a comment -

          Unfortunately, this is not specific to WebHDFS. HDFS also fails with the test.

          testAppend.patch: unit tests similar to Zhanwei's script.

          Tsz Wo Nicholas Sze added a comment -

          It looks like the BlockPoolSliceScanner incorrectly marks the replica as corrupted.

          2012-03-15 13:26:27,317 WARN  datanode.BlockPoolSliceScanner (BlockPoolSliceScanner.java:verifyBlock(419)) - First Verification failed for BP-426067686-10.10.10.105-1331843180223:blk_-951537730291424878_1083
          java.io.IOException: Stream closed
          	at java.io.BufferedInputStream.getInIfOpen(BufferedInputStream.java:134)
          	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
          	at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
          	at java.io.DataInputStream.readShort(DataInputStream.java:295)
          	at org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:78)
          	at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:228)
          	at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.verifyBlock(BlockPoolSliceScanner.java:378)
          	at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.verifyFirstBlock(BlockPoolSliceScanner.java:463)
          	at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.scan(BlockPoolSliceScanner.java:594)
          	at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.scanBlockPoolSlice(BlockPoolSliceScanner.java:570)
          	at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:95)
          	at java.lang.Thread.run(Thread.java:680)
          2012-03-15 13:26:27,320 WARN  datanode.BlockPoolSliceScanner (BlockPoolSliceScanner.java:verifyBlock(419)) - Second Verification failed for BP-426067686-10.10.10.105-1331843180223:blk_-951537730291424878_1083
          java.io.IOException: Stream closed
          	at java.io.BufferedInputStream.getInIfOpen(BufferedInputStream.java:134)
          	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
          	at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
          	at java.io.DataInputStream.readShort(DataInputStream.java:295)
          	at org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:78)
          	at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:228)
          	at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.verifyBlock(BlockPoolSliceScanner.java:378)
          	at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.verifyFirstBlock(BlockPoolSliceScanner.java:463)
          	at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.scan(BlockPoolSliceScanner.java:594)
          	at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.scanBlockPoolSlice(BlockPoolSliceScanner.java:570)
          	at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:95)
          	at java.lang.Thread.run(Thread.java:680)
          2012-03-15 13:26:27,320 WARN  datanode.BlockPoolSliceScanner (BlockPoolSliceScanner.java:addBlock(234)) - Adding an already existing block BP-426067686-10.10.10.105-1331843180223:blk_-951537730291424878_1084
          2012-03-15 13:26:27,320 INFO  datanode.BlockPoolSliceScanner (BlockPoolSliceScanner.java:handleScanFailure(301)) - Reporting bad block BP-426067686-10.10.10.105-1331843180223:blk_-951537730291424878_1083
          2012-03-15 13:26:27,321 INFO  DataNode.clienttrace (BlockReceiver.java:run(1062)) - src: /127.0.0.1:54170, dest: /127.0.0.1:54083, bytes: 84992, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_2012624116_1, offset: 0, srvID: DS-600201831-10.10.10.105-54083-1331843
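
          The trace above shows the BlockSender constructor reading the metadata header from a stream that has already been closed. As a hypothetical illustration of the mechanism only (not HDFS code), reading from a BufferedInputStream after it has been closed out from under the reader produces the same "Stream closed" IOException:

          import java.io.BufferedInputStream;
          import java.io.ByteArrayInputStream;
          import java.io.DataInputStream;
          import java.io.IOException;

          public class StreamClosedDemo {
              public static void main(String[] args) throws Exception {
                  byte[] fakeHeader = new byte[] {0, 1, 2, 3, 4, 5, 6};   // stand-in for a 7-byte meta header
                  DataInputStream in = new DataInputStream(
                      new BufferedInputStream(new ByteArrayInputStream(fakeHeader)));

                  in.close();            // simulates another code path closing the shared stream

                  try {
                      in.readShort();    // the first call BlockMetadataHeader.readHeader(..) makes
                  } catch (IOException e) {
                      System.out.println(e);   // java.io.IOException: Stream closed
                  }
              }
          }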
          
          Zhanwei Wang added a comment -

          I also ran this test on HDFS 1.0.1 and the script finished successfully, but I found some strange things in the datanode log.
          1) I also got "DataBlockScanner: Verification failed". Since my HDFS is a single-node cluster running on the local network and local disk, nothing should fail at all.
          2) I got lots of "DataNode: Client calls recoverBlock" messages and I have no idea what happened.

          2012-03-15 22:43:33,572 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_3505027625176300242_3086 src: /127.0.0.1:63879 dest: /127.0.0.1:50010
          2012-03-15 22:43:33,590 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification failed for blk_8647935099647661204_1101. Its ok since it not in datanode dataset anymore.
          2012-03-15 22:43:33,659 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Reopen Block for append blk_3505027625176300242_3086
          2012-03-15 22:43:33,662 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: setBlockPosition trying to set position to 275712 for block blk_3505027625176300242_3086 which is not a multiple of bytesPerChecksum 512
          2012-03-15 22:43:33,662 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: computePartialChunkCrc sizePartialChunk 256 block blk_3505027625176300242_3086 offset in block 275456 offset in metafile 2159
          2012-03-15 22:43:33,662 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Read in partial CRC chunk from disk for block blk_3505027625176300242_3086
          2012-03-15 22:43:33,664 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:63879, dest: /127.0.0.1:50010, bytes: 307712, op: HDFS_WRITE, cliID: DFSClient_110493321, offset: 0, srvID: DS-1576952563-10.64.55.158-50010-1331870286875, blockid: blk_3505027625176300242_3086, duration: 2816000
          2012-03-15 22:43:33,664 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_3505027625176300242_3086 terminating
          2012-03-15 22:43:33,677 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Client calls recoverBlock(block=blk_3505027625176300242_3086, targets=[127.0.0.1:50010])
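
          The setBlockPosition/computePartialChunkCrc lines above are consistent with an append landing on a non-chunk-aligned offset, which by itself is not an error; the datanode just recomputes the CRC of the partial last chunk. A small sketch of the arithmetic, assuming the usual meta-file layout of a 7-byte header followed by one 4-byte CRC per 512-byte chunk:

          public class PartialChunkMath {
              public static void main(String[] args) {
                  final int bytesPerChecksum = 512;   // from the log
                  final int headerLen = 7;            // assumed meta-file header size
                  final int checksumLen = 4;          // assumed CRC size per chunk

                  long appendOffset = 275712;         // "trying to set position to 275712"

                  long fullChunks = appendOffset / bytesPerChecksum;          // 538 complete chunks
                  long partialChunk = appendOffset % bytesPerChecksum;        // 256    -> "sizePartialChunk 256"
                  long offsetInBlock = fullChunks * bytesPerChecksum;         // 275456 -> "offset in block 275456"
                  long offsetInMeta = headerLen + fullChunks * checksumLen;   // 2159   -> "offset in metafile 2159"

                  System.out.println(partialChunk + " " + offsetInBlock + " " + offsetInMeta);
              }
          }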
          
          Tsz Wo Nicholas Sze added a comment -

          Brandon found out that this was broken by HDFS-3082.

          Brandon Li added a comment -

          HDFS-2525 fixed the race condition in 0.23.2 but 0.23.1 still has this problem.

          Brandon Li added a comment -

          Attached patch for the trunk.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12518984/HDFS-3100.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 11 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2036//console

          This message is automatically generated.

          Brandon Li added a comment -

          Regenerated the patch from the right directory.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12519100/HDFS-3100.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 11 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hdfs.TestCrcCorruption
          org.apache.hadoop.hdfs.server.namenode.TestFsck
          org.apache.hadoop.hdfs.server.namenode.TestCorruptFilesJsp
          org.apache.hadoop.hdfs.TestClientBlockVerification
          org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks
          org.apache.hadoop.hdfs.TestDatanodeBlockScanner
          org.apache.hadoop.hdfs.TestDataTransferProtocol
          org.apache.hadoop.hdfs.TestDFSShell
          org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks
          org.apache.hadoop.hdfs.TestDFSClientRetries

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2040//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2040//console

          This message is automatically generated.

          Brandon Li added a comment -

          The previous patch missed one condition and didn't send the checksum to the client, so real corruption couldn't be detected. This problem is fixed in the new patch.

          Tsz Wo Nicholas Sze added a comment -

          Brandon, I think we could avoid the metaFileExists(..) call and, as you mentioned, there is a race condition between the two calls.
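
          A hypothetical sketch of the difference (method names here are illustrative, not the actual patch): checking metaFileExists(..) and then opening the file is a check-then-act race, whereas opening directly and mapping FileNotFoundException to "no meta file" is atomic:

          import java.io.DataInputStream;
          import java.io.File;
          import java.io.FileInputStream;
          import java.io.FileNotFoundException;
          import java.io.IOException;

          public class MetaOpenSketch {
              // Racy: the meta file can disappear between exists() and the open below.
              static DataInputStream openWithExistsCheck(File metaFile) throws IOException {
                  if (!metaFile.exists()) {
                      return null;    // caller treats "no meta file" specially
                  }
                  return new DataInputStream(new FileInputStream(metaFile));   // may still fail
              }

              // Race-free: open directly and treat FileNotFoundException as "no meta file".
              static DataInputStream openDirectly(File metaFile) {
                  try {
                      return new DataInputStream(new FileInputStream(metaFile));
                  } catch (FileNotFoundException e) {
                      return null;
                  }
              }
          }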

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12519162/HDFS-3100.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 11 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The patch appears to cause tar ant target to fail.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to cause Findbugs (version 1.3.9) to fail.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed the unit tests build

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2056//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2056//console

          This message is automatically generated.

          Brandon Li added a comment -

          Incorporated Nicholas' last comments.
          Added comments for the operation matrix of (corruptChecksumOK, meta_file_exist).

          Brandon Li added a comment -

          Incorporated Nicholas' last comment.
          Added comments for the operation matrix of (corruptChecksumOK, meta_file_exist).
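
          For illustration only, a hedged sketch of what that (corruptChecksumOk, meta file exists) matrix might look like in a BlockSender-style constructor, consistent with the eventual commit message quoted in the build comments below ("throw an exception when it needs to verify checksum but the meta data does not exist"); the method and variable names are hypothetical, not the committed code:

          import java.io.DataInputStream;
          import java.io.IOException;

          public class ChecksumDecisionSketch {
              /*
               * corruptChecksumOk | meta file exists | behaviour
               * ------------------+------------------+--------------------------------
               *       true        |       yes        | use the real checksum data
               *       true        |       no         | fall back to a "null" checksum
               *       false       |       yes        | use the real checksum data
               *       false       |       no         | throw: verification is required
               */
              static DataInputStream chooseChecksumSource(boolean corruptChecksumOk,
                                                          DataInputStream metaIn,
                                                          String blockName) throws IOException {
                  if (metaIn != null) {
                      return metaIn;        // meta data available: always use it
                  }
                  if (corruptChecksumOk) {
                      return null;          // caller tolerates a missing/ignored checksum
                  }
                  throw new IOException("Meta data does not exist but checksum verification"
                      + " is required for " + blockName);
              }
          }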

          Tsz Wo Nicholas Sze added a comment -

          +1 patch looks good. Thanks for fixing this.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12519338/HDFS-3100.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 11 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in .

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2063//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2063//console

          This message is automatically generated.

          Tsz Wo Nicholas Sze added a comment -

          I have committed this. Thanks, Brandon!

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12519338/HDFS-3100.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 11 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in .

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2064//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2064//console

          This message is automatically generated.

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #1987 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1987/)
          HDFS-3100. In BlockSender, throw an exception when it needs to verify checksum but the meta data does not exist. Contributed by Brandon Li (Revision 1303628)

          Result = SUCCESS
          szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1303628
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
          Hudson added a comment -

          Integrated in Hadoop-Common-trunk-Commit #1913 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1913/)
          HDFS-3100. In BlockSender, throw an exception when it needs to verify checksum but the meta data does not exist. Contributed by Brandon Li (Revision 1303628)

          Result = SUCCESS
          szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1303628
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-0.23-Commit #703 (See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/703/)
          svn merge -c 1303628 from trunk for HDFS-3100. (Revision 1303629)

          Result = SUCCESS
          szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1303629
          Files :

          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
          Hudson added a comment -

          Integrated in Hadoop-Common-0.23-Commit #712 (See https://builds.apache.org/job/Hadoop-Common-0.23-Commit/712/)
          svn merge -c 1303628 from trunk for HDFS-3100. (Revision 1303629)

          Result = SUCCESS
          szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1303629
          Files :

          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk-Commit #1922 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1922/)
          HDFS-3100. In BlockSender, throw an exception when it needs to verify checksum but the meta data does not exist. Contributed by Brandon Li (Revision 1303628)

          Result = ABORTED
          szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1303628
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-0.23-Commit #720 (See https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/720/)
          svn merge -c 1303628 from trunk for HDFS-3100. (Revision 1303629)

          Result = ABORTED
          szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1303629
          Files :

          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #992 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/992/)
          HDFS-3100. In BlockSender, throw an exception when it needs to verify checksum but the meta data does not exist. Contributed by Brandon Li (Revision 1303628)

          Result = FAILURE
          szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1303628
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-0.23-Build #205 (See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/205/)
          svn merge -c 1303628 from trunk for HDFS-3100. (Revision 1303629)

          Result = SUCCESS
          szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1303629
          Files :

          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-0.23-Build #233 (See https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/233/)
          svn merge -c 1303628 from trunk for HDFS-3100. (Revision 1303629)

          Result = FAILURE
          szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1303629
          Files :

          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk #1027 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1027/)
          HDFS-3100. In BlockSender, throw an exception when it needs to verify checksum but the meta data does not exist. Contributed by Brandon Li (Revision 1303628)

          Result = SUCCESS
          szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1303628
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
          Tsz Wo Nicholas Sze added a comment -

          Revised summary.


            People

            • Assignee:
              Brandon Li
              Reporter:
              Zhanwei Wang
            • Votes:
              0
              Watchers:
              4
