Hadoop HDFS / HDFS-5438

Flaws in block report processing can cause data loss

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 0.23.9, 2.2.0
    • Fix Version/s: 3.0.0, 0.23.10, 2.3.0
    • Component/s: namenode
    • Labels: None

      Description

      Incremental block reports from data nodes and block commits are asynchronous with respect to each other. This becomes troublesome when the gen stamp for a block is changed during a write pipeline recovery.

      • If an incremental block report is delayed from a node but NN had enough replicas already, a report with the old gen stamp may be received after block completion. This replica will be correctly marked corrupt. But if the node had participated in the pipeline recovery, a new (delayed) report with the correct gen stamp will come soon. However, this report won't have any effect on the corrupt state of the replica.
      • If block reports are received while the block is still under construction (i.e. client's call to make block committed has not been received by NN), they are blindly accepted regardless of the gen stamp. If a failed node reports in with the old gen stamp while pipeline recovery is on-going, it will be accepted and counted as valid during commit of the block.

      Due to the above two problems, correct replicas can be marked corrupt and corrupt replicas can be accepted during commit (a rough sketch of the checks involved follows the list below). So far we have observed two cases in production.

      • The client hangs forever to close a file. All replicas are marked corrupt.
      • After the successful close of a file, read fails. Corrupt replicas are accepted during commit and valid replicas are marked corrupt afterward.
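
      To make the two races above concrete, the following is a minimal, hypothetical sketch of the kind of gen stamp comparison the NameNode can make when a replica is reported. All class and method names here are illustrative assumptions, not the actual BlockManager code.

```java
// Illustrative only: simplified stand-ins for the real NameNode data structures.
import java.util.HashMap;
import java.util.Map;

class GenStampCheckSketch {
    enum ReplicaState { FINALIZED, RBW }

    static class StoredBlock {
        long blockId;
        long genStamp;            // current gen stamp, bumped by pipeline recovery
        boolean underConstruction;
        Map<String, Long> reportedGenStamps = new HashMap<>(); // datanode -> reported GS

        StoredBlock(long blockId, long genStamp, boolean underConstruction) {
            this.blockId = blockId;
            this.genStamp = genStamp;
            this.underConstruction = underConstruction;
        }
    }

    /** Decide what to do with a (possibly delayed) incremental block report. */
    static String processReport(StoredBlock stored, String datanode,
                                long reportedGenStamp, ReplicaState state) {
        if (!stored.underConstruction) {
            // Problem 1: after the block is complete, a stale report must be treated as
            // corrupt, but a later report with the recovered gen stamp should be able to
            // clear that corrupt mark.
            return reportedGenStamp == stored.genStamp
                ? "VALID (clear corrupt mark if set)"
                : "CORRUPT (gen stamp mismatch)";
        }
        // Problem 2: while under construction, do not blindly accept FINALIZED replicas;
        // remember the reported gen stamp so stale ones can be excluded at commit time.
        stored.reportedGenStamps.put(datanode, reportedGenStamp);
        if (state == ReplicaState.FINALIZED && reportedGenStamp != stored.genStamp) {
            return "IGNORE FOR COMMIT (stale gen stamp from failed pipeline node)";
        }
        return "ACCEPT";
    }

    public static void main(String[] args) {
        StoredBlock blk = new StoredBlock(1L, 1002L, true); // recovered to GS 1002
        System.out.println(processReport(blk, "dn1", 1001L, ReplicaState.FINALIZED)); // stale
        System.out.println(processReport(blk, "dn2", 1002L, ReplicaState.FINALIZED)); // good
    }
}
```

      In this sketch the stale FINALIZED report from dn1 is excluded from the commit count, while dn2's report with the recovered gen stamp is accepted.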
      Attachments

      1. HDFS-5438.trunk.patch
        17 kB
        Kihwal Lee
      2. HDFS-5438-1.trunk.patch
        17 kB
        Kihwal Lee
      3. HDFS-5438-2.trunk.patch
        17 kB
        Kihwal Lee
      4. HDFS-5438-3.trunk.patch
        16 kB
        Kihwal Lee
      5. HDFS-5438-4.branch-0.23.patch
        21 kB
        Kihwal Lee
      6. HDFS-5438-4.trunk.patch
        21 kB
        Kihwal Lee

        Activity

        Kihwal Lee added a comment -

        This is a high-level description of what the patch does.

        • The patch makes the NN save the list of already reported replicas when starting a pipeline recovery. If a new report with the new gen stamp is not received for an existing replica by the time the recovery is done, that replica will be marked corrupt.
        • If a block report is received for an existing corrupt replica and it is no longer corrupt, the NN will remove it from the corrupt replicas map.
        • If the client cannot close a file because the block does not have enough valid replicas, it eventually gives up rather than hanging forever. The client already fails after a number of retries when adding a new block; completeFile() will use the same retry limit, but the timeout doubles every time to make it try harder. With the default of 5 retries, a client will wait at least 4 minutes before giving up. If the NN is not responding, it may wait longer. (See the retry sketch below.)
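
        As a rough illustration of the last bullet: assuming, purely for this example, an initial 8-second wait that doubles on each of 5 retries, the total is 8 + 16 + 32 + 64 + 128 seconds, roughly 4 minutes. A completeFile() retry loop of that shape could look like the following; the names and defaults are assumptions, not the actual DFSOutputStream code.

```java
import java.io.IOException;

class CompleteFileRetrySketch {
    // Assumed values for illustration; the real client reads these from its configuration.
    static final int MAX_RETRIES = 5;
    static final long INITIAL_SLEEP_MS = 8_000L; // 8 + 16 + 32 + 64 + 128 s ~= 4 minutes

    interface Namenode { boolean complete(String src) throws IOException; }

    /** Keep asking the NN to complete the file, but give up after a bounded backoff. */
    static void completeFile(Namenode nn, String src) throws IOException {
        long sleepMs = INITIAL_SLEEP_MS;
        for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
            if (nn.complete(src)) {
                return; // NN saw enough valid replicas; the file is closed
            }
            if (attempt == MAX_RETRIES) {
                throw new IOException("Unable to close file " + src
                    + " because the last block does not have enough valid replicas.");
            }
            try {
                Thread.sleep(sleepMs);   // wait longer each time so the NN can catch up
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                throw new IOException("Interrupted while closing " + src, ie);
            }
            sleepMs *= 2;                // double the timeout to "try harder"
        }
    }
}
```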
        Kihwal Lee added a comment -

        Oops. The critical line is commented out for testing in the patch.

        Kihwal Lee added a comment -

        Reposting the patch.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12610678/HDFS-5438.trunk.patch
        against trunk revision .

        -1 patch. Trunk compilation may be broken.

        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5298//console

        This message is automatically generated.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12610678/HDFS-5438.trunk.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 1 new or modified test files.

        -1 javac. The patch appears to cause the build to fail.

        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5297//console

        This message is automatically generated.

        Kihwal Lee added a comment -

        The new patch adds a check for the gen stamp in the case where the stored block state is UNDER_CONSTRUCTION and the reported replica state is FINALIZED.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12610696/HDFS-5438.trunk.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 1 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
        org.apache.hadoop.hdfs.server.namenode.TestCorruptFilesJsp

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/5299//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5299//console

        This message is automatically generated.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12610713/HDFS-5438-1.trunk.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 1 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.server.namenode.TestCorruptFilesJsp
        org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/5300//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5300//console

        This message is automatically generated.

        Kihwal Lee added a comment -

        The strict acceptance check during pipeline recovery fails in HA failover, because the new active node does not know about recoveries until the client calls updatePipeline(). I will have BlockInfoUnderConstruction remember the reported replicas' gen stamps as well and pick the right ones during updatePipeline() or commit of the block.

        Kihwal Lee added a comment -

        The new patch records the reported gen stamp in the replica list of BlockInfoUnderConstruction. On commit or updatePipeline(), replicas with a wrong gen stamp are removed.

        As a refinement to the block "uncorrupt" feature, replicas are only uncorrupted if they were marked corrupt because of a gen stamp mismatch. A delay is introduced in TestCorruptFilesJsp to ensure block reports don't uncorrupt replicas with real data corruption.

        The scenario in the added test case won't happen in practice, but it exercises the checks that are in place to avoid committing blocks with only corrupt replicas.
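
        A rough sketch of the idea described above, with illustrative names rather than the actual BlockInfoUnderConstruction API: each expected replica remembers the gen stamp it last reported, and commit/updatePipeline() drops the ones that don't match the block's recovered gen stamp.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

class UnderConstructionSketch {
    /** One expected replica of a block under construction. */
    static class ReplicaUC {
        final String datanode;
        long reportedGenStamp = -1;   // last gen stamp reported by this datanode

        ReplicaUC(String datanode) { this.datanode = datanode; }
    }

    static class BlockUC {
        long genStamp;                              // bumped by pipeline recovery / updatePipeline
        final List<ReplicaUC> replicas = new ArrayList<>();

        /** Record the gen stamp that came with a (possibly stale) block report. */
        void addReportedReplica(String dn, long reportedGS) {
            for (ReplicaUC r : replicas) {
                if (r.datanode.equals(dn)) { r.reportedGenStamp = reportedGS; return; }
            }
            ReplicaUC r = new ReplicaUC(dn);
            r.reportedGenStamp = reportedGS;
            replicas.add(r);
        }

        /** On commit or updatePipeline(): drop replicas whose reported gen stamp is stale. */
        void setGenerationStampAndPruneReplicas(long newGS) {
            this.genStamp = newGS;
            Iterator<ReplicaUC> it = replicas.iterator();
            while (it.hasNext()) {
                ReplicaUC r = it.next();
                if (r.reportedGenStamp != -1 && r.reportedGenStamp != newGS) {
                    it.remove();   // node reported before (or without) the pipeline recovery
                }
            }
        }
    }
}
```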

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12610869/HDFS-5438-2.trunk.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 2 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.TestReadWhileWriting
        org.apache.hadoop.hdfs.TestFileAppend4
        org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
        org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

        The following test timeouts occurred in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
        org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache
        org.apache.hadoop.hdfs.TestLeaseRecovery2

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/5304//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5304//console

        This message is automatically generated.

        Kihwal Lee added a comment -

        The test failures were caused by overly aggressive exclusion of stale replicas. In one place, the method added by the patch was called before the gen stamp was updated, so all good replicas were discarded; this is what broke TestFileAppend4.

        The new patch should fix these problems.

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12610987/HDFS-5438-3.trunk.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 2 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/5312//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5312//console

        This message is automatically generated.

        Daryn Sharp added a comment -

        A suggestion in FSNamesystem: instead of invoking blockinfo.setGenerationStamp(gs) and then blockinfo.processRecordedReplicas(gs), perhaps you could create a single method like blockinfo.setGenerationStampAndVerifyReplicas(gs) that does both.

        Otherwise, the only nit is using a string prefix of "GS:" to determine the error. Perhaps adding an enum in BlockToMarkCorrupt to signify the reason would be cleaner.
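
        As a sketch of this suggestion (illustrative code, not the committed patch), the two calls could be folded into one method and the string prefix replaced by an enum:

```java
class ReviewSuggestionSketch {
    /** Reason a replica is flagged, instead of encoding it in a "GS:" string prefix. */
    enum Reason { GENSTAMP_MISMATCH, SIZE_MISMATCH, CORRUPT_DATA }

    static class BlockInfo {
        private long genStamp;

        void setGenerationStamp(long gs) { this.genStamp = gs; }

        private void processRecordedReplicas(long gs) {
            // prune recorded replicas whose gen stamp does not match gs
        }

        /** Single entry point combining the two steps, as suggested in review. */
        void setGenerationStampAndVerifyReplicas(long gs) {
            setGenerationStamp(gs);
            processRecordedReplicas(gs);
        }
    }

    static class BlockToMarkCorrupt {
        final BlockInfo stored;
        final Reason reason;       // enum instead of parsing a string prefix

        BlockToMarkCorrupt(BlockInfo stored, Reason reason) {
            this.stored = stored;
            this.reason = reason;
        }
    }
}
```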

        Kihwal Lee added a comment -

        New patch attached.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12613682/HDFS-5438-4.trunk.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 2 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/5424//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5424//console

        This message is automatically generated.

        Kihwal Lee added a comment -

        The unit test failure is not related to the patch. It is being tracked by HADOOP-9587.

        Daryn Sharp added a comment -

        One question: in CorruptReplicasMap, the SortedSet/TreeSet is replaced with a HashMap, which lacks the key ordering provided by the tree. Assuming the ordering was previously necessary, should it now be a SortedMap/TreeMap?

        Kihwal Lee added a comment -

        The top-level data structure is still a SortedMap. The change is in the per-block node collection, from a SortedSet/TreeSet to a HashMap. The blocks need to be in sorted order, since there is a method that returns a range of corrupt block IDs, but the data nodes associated with a block don't need to be sorted. In fact, a sorted collection may perform worse than a HashMap, since the majority of the use cases call contains() against it and the rest call size(). So I see no reason to make the node collection sorted.
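
        A simplified sketch of the shape being discussed, with illustrative key types (long block IDs and datanode names as strings) instead of the real Block and DatanodeDescriptor classes: the outer map stays sorted so a range of corrupt block IDs can still be returned, while the per-block node collection only needs contains() and size(), so a hash-based map suffices.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

class CorruptReplicasMapSketch {
    enum Reason { GENSTAMP_MISMATCH, ANY }

    // Outer map sorted by block ID so a range of corrupt block IDs can be listed;
    // inner map is hash-based because lookups dominate and ordering is never used.
    private final Map<Long, Map<String, Reason>> corruptReplicas = new TreeMap<>();

    void addToCorruptReplicasMap(long blockId, String datanode, Reason reason) {
        corruptReplicas.computeIfAbsent(blockId, b -> new HashMap<>())
                       .put(datanode, reason);
    }

    boolean isReplicaCorrupt(long blockId, String datanode) {
        Map<String, Reason> nodes = corruptReplicas.get(blockId);
        return nodes != null && nodes.containsKey(datanode);   // the common contains() path
    }

    int numCorruptReplicas(long blockId) {
        Map<String, Reason> nodes = corruptReplicas.get(blockId);
        return nodes == null ? 0 : nodes.size();               // the size() path
    }
}
```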

        Daryn Sharp added a comment -

        +1 Makes sense to me.

        Kihwal Lee added a comment -

        Attaching the branch-0.23 version of the patch. A separate patch is needed because markBlockAsCorrupt() in 0.23 does not take a BlockToMarkCorrupt but expects its individual fields to be passed in. Otherwise the change is identical to the trunk version. I ran a number of tests to make sure it was ported correctly.

        Precommit will fail on this.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12613916/HDFS-5438-4.branch-0.23.patch
        against trunk revision .

        -1 patch. The patch command could not apply the patch.

        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5436//console

        This message is automatically generated.

        Kihwal Lee added a comment -

        Committed to trunk, branch-2 and branch-0.23. Thanks for the reviews, Daryn.

        Hudson added a comment -

        SUCCESS: Integrated in Hadoop-trunk-Commit #4742 (See https://builds.apache.org/job/Hadoop-trunk-Commit/4742/)
        HDFS-5438. Flaws in block report processing can cause data loss. Contributed by Kihwal Lee. (kihwal: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1542054)

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientProtocolForPipelineRecovery.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCorruptFilesJsp.java
        Hudson added a comment -

        SUCCESS: Integrated in Hadoop-Yarn-trunk #392 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/392/)
        HDFS-5438. Flaws in block report processing can cause data loss. Contributed by Kihwal Lee. (kihwal: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1542054)

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientProtocolForPipelineRecovery.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCorruptFilesJsp.java
        Hudson added a comment -

        FAILURE: Integrated in Hadoop-Hdfs-0.23-Build #791 (See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/791/)
        HDFS-5438. Flaws in block report processing can cause data loss. Contributed by Kihwal Lee. (kihwal: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1542058)

        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientProtocolForPipelineRecovery.java
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCorruptFilesJsp.java
        Hudson added a comment -

        FAILURE: Integrated in Hadoop-Hdfs-trunk #1583 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1583/)
        HDFS-5438. Flaws in block report processing can cause data loss. Contributed by Kihwal Lee. (kihwal: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1542054)

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientProtocolForPipelineRecovery.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCorruptFilesJsp.java
        Hudson added a comment -

        FAILURE: Integrated in Hadoop-Mapreduce-trunk #1609 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1609/)
        HDFS-5438. Flaws in block report processing can cause data loss. Contributed by Kihwal Lee. (kihwal: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1542054)

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientProtocolForPipelineRecovery.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCorruptFilesJsp.java

          People

          • Assignee: Kihwal Lee
          • Reporter: Kihwal Lee
          • Votes: 0
          • Watchers: 13
