Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.0.0-alpha
    • Fix Version/s: 0.23.3, 2.0.2-alpha
    • Component/s: namenode
    • Labels: upgrade
    • Target Version/s:
    • Hadoop Flags: Reviewed

      Description

      When upgrading from 1.x to 2.0.0, the SecondaryNameNode can fail to start up:

      2012-06-16 09:52:33,812 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
      java.io.IOException: Inconsistent checkpoint fields.
      LV = -40 namespaceID = 64415959 cTime = 1339813974990 ; clusterId = CID-07a82b97-8d04-4fdd-b3a1-f40650163245 ; blockpoolId = BP-1792677198-172.29.121.67-1339813967723.
      Expecting respectively: -19; 64415959; 0; ; .
      at org.apache.hadoop.hdfs.server.namenode.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:120)
      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:454)
      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:334)
      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$2.run(SecondaryNameNode.java:301)
      at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:438)
      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:297)
      at java.lang.Thread.run(Thread.java:662)
      

      The error check we're hitting came from HDFS-1073, and it's intended to verify that we're connecting to the correct NN. But the check is too strict: it treats "different metadata version" the same as "different clusterID".

      I believe the check in doCheckpoint simply needs to explicitly check for and handle the upgrade case.
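
      Roughly, that means distinguishing "same NN, different metadata version" (the upgrade case) from "wrong NN". A sketch of that shape (illustrative only; method names are hypothetical here — see the attached patches for the real change):

      if (checkpointImage.getNamespaceID() != 0 &&
          sig.storageVersionMatches(checkpointImage.getStorage())) {
        // Local dirs are non-empty and the metadata version matches:
        // require an exact match against the NN we checkpointed before.
        sig.validateStorageInfo(checkpointImage);
      } else {
        // Empty local dirs, or a layoutVersion/cTime change: treat this as
        // the format/upgrade case instead of failing the strict comparison.
      }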

      Attachments

      1. hdfs-3597.txt
        8 kB
        Andy Isaacson
      2. hdfs-3597-2.txt
        7 kB
        Andy Isaacson
      3. hdfs-3597-3.txt
        8 kB
        Andy Isaacson
      4. hdfs-3597-4.txt
        8 kB
        Andy Isaacson

        Activity

        Andy Isaacson created issue -
        Andy Isaacson made changes -
        Field Original Value New Value
        Project Hadoop Common [ 12310240 ] Hadoop HDFS [ 12310942 ]
        Key HADOOP-8553 HDFS-3597
        Affects Version/s 2.0.0-alpha [ 12320353 ]
        Affects Version/s 2.0.0-alpha [ 12320352 ]
        Target Version/s 2.0.1-alpha [ 12321441 ]
        Andy Isaacson added a comment -

        Not sure how this got created as a HADOOP- bug, I thought I specified HDFS. Pilot error I'm sure. Fixed now.

        Andy Isaacson added a comment -

        Attaching proposed fix, including positive and negative test cases showing that the check functions as expected.

        Andy Isaacson made changes -
        Attachment hdfs-3597.txt [ 12535025 ]
        Todd Lipcon added a comment -
        +  public boolean storageVersionMatches(FSImage si) throws IOException {
        +    return (layoutVersion == si.getStorage().layoutVersion)
        +           && (cTime == si.getStorage().cTime);
        +  }
        +
        

        I think this should take StorageInfo as a parameter instead, and you would pass image.getStorage() in.


        +    if (checkpointImage.getNamespaceID() != 0 &&
        +        sig.storageVersionMatches(checkpointImage)) {
        +      // If the image actually has some data and the version matches, make sure
        +      // we're talking to the same NN as we did before.
               sig.validateStorageInfo(checkpointImage);
             } else {
        

        I'm not 100% convinced of the logic. I think we should always verify that it's the same NN – but just loosen the validateStorageInfo check here to not check the versioning info. For example, if I accidentally point my 2NN at the wrong NN, it won't start, even if that NN happens to be from a different version. It should only blow its local storage away if it's the same NN (namespace/cluster) but a different version.
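
        In sketch form, what I'm suggesting (illustrative only; method names hypothetical):

          if (!sig.isSameCluster(checkpointImage)) {
            // Different namespace/cluster: wrong NN entirely, refuse to checkpoint.
            throw new IOException("Inconsistent checkpoint fields: " + sig);
          }
          if (!sig.storageVersionMatches(checkpointImage.getStorage())) {
            // Same NN but a different metadata version: only now is it safe
            // to blow away local storage and re-download the image.
          }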


        +  public void tweakSecondaryNameNodeProperty(String snndir, String prop, String val)
        

        Instead, can you use FSImageTestUtil.corruptVersionFile here? It seems to do the same thing – except it properly closes streams when it's done with them.
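
        For example (illustrative only, assuming corruptVersionFile takes the VERSION file plus a key/value pair, as its existing callers suggest):

          File versionFile = new File(checkpointDir, "current/VERSION");
          FSImageTestUtil.corruptVersionFile(versionFile, "layoutVersion", "-39");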


        +  @Before
        +  public void setupCluster() throws IOException {
        +  }
        +
        +  @After
        +  public void teardownCluster() throws IOException {
        +  }
        

        No need for these...?


        +    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
        

        Can you change this test to not need any datanodes? I.e., instead of writing a file to generate edits, just do a metadata op like mkdir. Then the test will run faster.
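
        For example (illustrative):

          // A datanode-free cluster is enough; a metadata-only op generates edits.
          cluster = new MiniDFSCluster.Builder(conf).numDataNodes(0).build();
          cluster.getFileSystem().mkdirs(new Path("/test-dir"));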


        • It seems odd that you print out all of the checkpoint dirs, but then only corrupt the property in one of them. Shouldn't you be corrupting it in all of them?
        • Whenever you start a minicluster, you should have the matching shutdown in a finally clause to avoid potentially leaking the cluster between test cases (see the sketch after this list).
        • The spelling fix in NNStorage is unrelated. Cleanup's good, but try not to do so in files that aren't otherwise touched by your patch.
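
        A minimal sketch of that shutdown pattern:

          MiniDFSCluster cluster = null;
          try {
            cluster = new MiniDFSCluster.Builder(conf).build();
            // ... test body ...
          } finally {
            if (cluster != null) {
              cluster.shutdown();  // always runs, even if the test body throws
            }
          }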
        Andy Isaacson added a comment -

        I think this should take StorageInfo as a parameter instead, and you would pass image.getStorage() in.

        Sounds good, thanks.

        I'm not 100% convinced of the logic. I think we should always verify that it's the same NN – but just loosen the validateStorageInfo check here to not check the versioning info. For example, if I accidentally point my 2NN at the wrong NN, it won't start, even if that NN happens to be from a different version. It should only blow its local storage away if it's the same NN (namespace/cluster) but a different version.

        Fair enough, but we don't want to loosen the check in validateStorageInfo itself, because it's used in a half dozen other places that want full checking, I think. I'll refactor the checks.

        Instead, can you use FSImageTestUtil.corruptVersionFile here?

        Great, didn't know about that!

        No need for these...?

        Indeed, leftover from a previous test design.

        Can you change this test to not need any datanodes? ... mkdir

        A fine plan, done.

        It seems odd that you print out all of the checkpoint dirs, but then only corrupt the property in one of them. Shouldn't you be corrupting it in all of them?

        That's an issue I was confused about too. I don't understand why the test has multiple checkpoint dirs, nor why my 2NN is running in snn.getCheckpointDirs().get(1) rather than .get(0). (If I corrupt the first checkpointdir, there is no perceptible effect on the testcase.) The println is a leftover from when I was still attempting to exercise the upgrade code.

        The spelling fix in NNStorage is unrelated. Cleanup's good, but try not to do so in files that aren't otherwise touched by your patch.

        Dropped. At some point during development my fix touched NNStorage.

        Andy Isaacson added a comment -

        Attaching new version of patch that addresses review comments. Please check the doCheckpoint logic specifically; I'm happy with this refactoring but am open to better suggestions.

        Running a full set of tests locally to verify no breakage.

        Andy Isaacson made changes -
        Attachment hdfs-3597-2.txt [ 12535272 ]
        Todd Lipcon added a comment -

        That's an issue I was confused about too. I don't understand why the test has multiple checkpoint dirs, nor why my 2NN is running in snn.getCheckpointDirs().get(1) rather than .get(0). (If I corrupt the first checkpointdir, there is no perceptible effect on the testcase.) The println is a leftover from when I was still attempting to exercise the upgrade code.

        The 2NN can be configured with multiple directories. Our tests make use of that feature:

                conf.set(DFS_NAMENODE_CHECKPOINT_DIR_KEY,
                    fileAsURI(new File(base_dir, "namesecondary" + (2*nnIndex + 1)))+","+
                    fileAsURI(new File(base_dir, "namesecondary" + (2*nnIndex + 2))));
        

        (from MiniDFSCluster source)

        I bet we have some bug/feature whereby if only one of the two is corrupted, the behavior depends on which of the two it was. My guess is we iterate over each of the dirs during startup, and load the properties from each, so it's the last one which takes precedence by the time we get to the version checking code. Might be worth fixing this in a separate JIRA (out of scope for this one).

        Given the above, I think it makes sense to edit the VERSION file in both of those directories, though, since you're basically depending on some other bug in this test case currently.
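
        Something like this (illustrative; assuming getCheckpointDirs() hands back file: URIs as configured above):

          for (URI u : snn.getCheckpointDirs()) {
            File versionFile = new File(new File(u), "current/VERSION");
            FSImageTestUtil.corruptVersionFile(versionFile, "layoutVersion", "-39");
          }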

        Will look at your new patch later this afternoon.

        Andy Isaacson added a comment -

        The 2NN can be configured with multiple directories.

        Thanks for the explanation, that's very enlightening. Looking at the results now.

        Andy Isaacson added a comment -

        Address review feedback and adjust the test to more accurately exercise the upgrade scenario.

        1. We now corrupt all 2NN directories.
        2. We now test upgrade from -39, which fixes some unexplained test failures.
        3. Clean up the test.
        4. Drop the datanodes and use mkdir instead of writing a file, for quicker test startup.
        Andy Isaacson made changes -
        Attachment hdfs-3597-3.txt [ 12535505 ]
        Aaron T. Myers made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        Target Version/s 2.0.1-alpha [ 12321440 ]
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12535505/hdfs-3597-3.txt
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 1 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        -1 findbugs. The patch appears to introduce 2 new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2768//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/2768//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2768//console

        This message is automatically generated.

        Andy Isaacson added a comment -

        The findbugs warnings appear to be spurious. If I'm reading them right, they're complaining about synchronization of BlockTokenSecretManager.keyUpdateInterval.

        Aaron T. Myers added a comment -

        I agree those warnings are spurious. Those findbugs warnings have already been addressed by HDFS-3615.

        Todd Lipcon added a comment -

        A few style nits:

        +  public boolean storageVersionMatches(StorageInfo si) throws IOException {
        

        Can this be package-private?


        +  boolean sameCluster(FSImage si) {
        

        Rename to isSameCluster


        • Nit: in several places you have the || or && operators at the beginning of a line of a multi-line condition. Our style in most of the code base is to put these at the end of the prior line (even though validateStorageInfo was previously doing it at the start of line)

           void validateStorageInfo(FSImage si) throws IOException {
        -    if(layoutVersion != si.getStorage().layoutVersion
        -       || namespaceID != si.getStorage().namespaceID 
        -       || cTime != si.getStorage().cTime
        -       || !clusterID.equals(si.getClusterID())
        -       || !blockpoolID.equals(si.getBlockPoolID())) {
        +    if (!sameCluster(si)
        +        || layoutVersion != si.getStorage().layoutVersion) {
        

        In an earlier comment you mentioned not wanting to change the behavior of validateStorageInfo. But I think here you lost the check on layoutVersion. Should this be !sameCluster(si) || !storageVersionMatches(si.getStorage()) instead, to maintain the old behavior?
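
        I.e., something like this sketch (helper names per the renames above; illustrative):

           void validateStorageInfo(FSImage si) throws IOException {
             if (!isSameCluster(si) ||
                 !storageVersionMatches(si.getStorage())) {
               // ... unchanged error handling ...
             }
           }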


        • In the new test case, add a reference to this JIRA - e.g. "Regression test for HDFS-3597". So if someone is unsure about the logic, it's easy to understand why the test was put there.
        • Style: rename versfiles to versionFiles for clarity.
        • You can reduce the scope of a lot of your variables inside doIt.
        Andy Isaacson added a comment -

        package-private

        done.

        isSameCluster

        done.

        || and && at EOL instead of beginning

        done.

        But I think here you lost the check on layoutVersion

        Verified with Todd that he meant to say cTime here. The change was unintentional; fixed.

        test comments, names, scopes

        done.

        TFTR!

        Andy Isaacson added a comment -

        Attaching hdfs-3597-4.txt, addressing review feedback.

        Andy Isaacson made changes -
        Attachment hdfs-3597-4.txt [ 12537109 ]
        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12537109/hdfs-3597-4.txt
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 1 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2859//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2859//console

        This message is automatically generated.

        Todd Lipcon added a comment -

        +1, lgtm. Thanks Andy. I'll commit this momentarily.

        Todd Lipcon made changes -
        Status Patch Available [ 10002 ] Resolved [ 5 ]
        Hadoop Flags Reviewed [ 10343 ]
        Fix Version/s 3.0.0 [ 12320356 ]
        Fix Version/s 2.2.0-alpha [ 12322472 ]
        Resolution Fixed [ 1 ]
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-trunk-Commit #2571 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2571/)
        HDFS-3597. SNN fails to start after DFS upgrade. Contributed by Andy Isaacson. (Revision 1363899)

        Result = SUCCESS
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1363899
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CheckpointSignature.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecondaryNameNodeUpgrade.java
        Hudson added a comment -

        Integrated in Hadoop-Common-trunk-Commit #2506 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2506/)
        HDFS-3597. SNN fails to start after DFS upgrade. Contributed by Andy Isaacson. (Revision 1363899)

        Result = SUCCESS
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1363899
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CheckpointSignature.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecondaryNameNodeUpgrade.java
        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk-Commit #2527 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2527/)
        HDFS-3597. SNN fails to start after DFS upgrade. Contributed by Andy Isaacson. (Revision 1363899)

        Result = FAILURE
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1363899
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CheckpointSignature.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecondaryNameNodeUpgrade.java
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-trunk #1111 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1111/)
        HDFS-3597. SNN fails to start after DFS upgrade. Contributed by Andy Isaacson. (Revision 1363899)

        Result = FAILURE
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1363899
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CheckpointSignature.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecondaryNameNodeUpgrade.java
        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk #1143 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1143/)
        HDFS-3597. SNN fails to start after DFS upgrade. Contributed by Andy Isaacson. (Revision 1363899)

        Result = FAILURE
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1363899
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CheckpointSignature.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecondaryNameNodeUpgrade.java
        Daryn Sharp added a comment -

        I've committed this to branch-0.23.

        Daryn Sharp made changes -
        Fix Version/s 0.23.3 [ 12320052 ]
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-0.23-Build #344 (See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/344/)
        svn merge -c 1363899 FIXES: HDFS-3597. SNN fails to start after DFS upgrade. Contributed by Andy Isaacson. (Revision 1372886)

        Result = SUCCESS
        daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372886
        Files :

        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CheckpointSignature.java
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecondaryNameNodeUpgrade.java
        Arun C Murthy made changes -
        Fix Version/s 3.0.0 [ 12320356 ]
        Harsh J made changes -
        Labels upgrade
        Harsh J made changes -
        Component/s name-node [ 12312926 ]
        Arun C Murthy made changes -
        Status Resolved [ 5 ] Closed [ 6 ]

          People

          • Assignee: Andy Isaacson
          • Reporter: Andy Isaacson
          • Votes: 0
          • Watchers: 11

            Dates

            • Created:
              Updated:
              Resolved:
