
HDFS-1969: Running rollback on new-version namenode destroys namespace

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.22.0
    • Fix Version/s: 0.22.0
    • Component/s: namenode
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      The following sequence leaves the namespace in an inconsistent/broken state:

      • format NN using 0.20 (or any prior release, probably)
      • run hdfs namenode -upgrade on 0.22. ^C the NN once it comes up.
      • run hdfs namenode -rollback on 0.22 (this should fail but doesn't!)

      This leaves the name directory in a state where the VERSION file claims it's a 0.20 namespace, but the fsimage is in 0.22 format. The NN then crashes when trying to start up.

      Attachments

      1. hdfs-1969.txt
        7 kB
        Todd Lipcon
      2. hdfs-1969.txt
        7 kB
        Todd Lipcon
      3. hdfs-1969.txt
        7 kB
        Todd Lipcon


          Activity

          Todd Lipcon added a comment -

          This issue seems to extend back a ways – I just tried the same steps going from 0.18.3 to 0.20.2 and saw essentially the same behavior.

          I think what needs to happen is that, when you ask the NN to do a rollback, it should only succeed if that NN's LAYOUT_VERSION constant matches the layout version recorded in the previous/ directory. Otherwise we should exit with an error message indicating that you should run "rollback" using the old version of the software instead of the new version.
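          A minimal sketch of that check, assuming it sits on the NN's rollback path (the identifiers prevState.getLayoutVersion() and FSConstants.LAYOUT_VERSION are taken from later comments in this thread; everything else is illustrative, not the actual patch):

              // Sketch only: a newer NameNode should refuse to roll back a previous/
              // directory that was written by an older layout version.
              int prevLayoutVersion = prevState.getLayoutVersion(); // recorded in previous/VERSION
              if (prevLayoutVersion != FSConstants.LAYOUT_VERSION) { // this software's layout version
                throw new IOException(
                    "Cannot rollback to storage version " + prevLayoutVersion
                    + " using this version of the NameNode, which uses storage version "
                    + FSConstants.LAYOUT_VERSION
                    + ". Please use the previous version of HDFS to perform the rollback.");
              }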

          Todd Lipcon added a comment -

          Here's a patch for trunk which fixes the issue (verified with a unit test and manually).

          There were a couple of problems with the test:

          • it didn't check the exception text at all, so some of the cases were actually failing due to an unrelated issue (the "previous" dir ended up with MD5 checksums even though the version was prior to that feature)
          • one of the test cases ran the NN in upgrade mode instead of rollback mode, so it wasn't actually testing what was intended.
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12479877/hdfs-1969.txt
          against trunk revision 1125217.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:
          org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
          org.apache.hadoop.hdfs.TestFileConcurrentReader
          org.apache.hadoop.hdfs.TestHDFSTrash

          +1 contrib tests. The patch passed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/598//testReport/
          Findbugs warnings: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/598//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/598//console

          This message is automatically generated.

          Konstantin Shvachko added a comment -

          Good catch, Todd. I definitely tested this back in the old days. Too bad the tests were wrong.
          A couple of nits on the patch:

          1. We should print the current version of the NN in the message:
            - " using this version of the NameNode. Please use the previous" +
            + " using NameNode version " + FSConstants.LAYOUT_VERSION + ". Please use " + prevState.getLayoutVersion() +
          2. Do you need to import IncorrectVersionException?
          3. Also, in NNStorage I don't think we should allow the newer software to write an older image version. The NN should fail before that, and the NN should be able to write ONLY the current image version.
          Todd Lipcon added a comment -

          New patch attached.

          • I changed the error message to: "Cannot rollback to storage version N using this version of the NameNode, which uses storage version M. Please use the previous version of HDFS to perform the rollback."

          I think this wording is clearer, since users don't know the correspondence between storage versions and Hadoop versions. The phrase "NameNode version -18" would confuse them.

          • Removed the accidental import
          • Added a Preconditions check that storage.getLayoutVersion() == FSConstants.LAYOUT_VERSION when we save the image (see the sketch below).
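          For concreteness, a hedged sketch of what such a check could look like at image-save time, using Guava's Preconditions (the surrounding save method and the exact message are not shown in this thread, so they are assumptions):

              import com.google.common.base.Preconditions;

              // Sketch only: refuse to save an fsimage whose recorded layout version
              // does not match the layout version this software writes.
              Preconditions.checkState(
                  storage.getLayoutVersion() == FSConstants.LAYOUT_VERSION,
                  "Cannot save image with layout version %s when the software layout version is %s",
                  storage.getLayoutVersion(), FSConstants.LAYOUT_VERSION);
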
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12479950/hdfs-1969.txt
          against trunk revision 1125217.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:
          org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
          org.apache.hadoop.hdfs.TestFileConcurrentReader
          org.apache.hadoop.hdfs.TestHDFSTrash

          +1 contrib tests. The patch passed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/604//testReport/
          Findbugs warnings: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/604//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/604//console

          This message is automatically generated.

          Todd Lipcon added a comment -

          The failed tests are all failing for other reasons. Konst, does this look good for commit?

          Konstantin Shvachko added a comment -

          Todd, what I meant is that NNStorage.setFields() should not contain the if statement

          if (layoutVersion <= -26) {
          ...
          }
          

          I actually don't see where it is triggered.
          In general, we can allow these ifs in the loading part of the code, like loadFSImage(). But the saving part should be free of dependencies on the layout version, because there is only one LV: the current one.

          The precondition sounds good, but it would be better to just convert it to an assert. I don't think we've used com.google.* before, at least not in HDFS. Why introduce it now?

          Todd Lipcon added a comment -

          I actually don't see where it is triggered.

          The test code uses this code in order to create VERSION files that look like they came from older versions. We could copy-paste some new code in to generate VERSION files, but that's a bit messy too.

          The precondition sounds good. But it would be better to just convert it to assert. I don't think we've used com.google.* before at least not in HDFS. Why introduce it now.

          There was a vote on the mailing list a few months back and people said they were OK with including Guava (com.google.*). The advantage of Preconditions over assert is that Preconditions will always run regardless of JVM options. In areas that aren't performance-sensitive, this is preferred.
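          For illustration, a small self-contained example of that difference (assumes Guava on the classpath; the values are made up):

              import com.google.common.base.Preconditions;

              // The assert below is silently skipped unless the JVM is started with -ea,
              // while Preconditions.checkState always runs and throws
              // IllegalStateException when the condition is false.
              public class CheckStyleDemo {
                public static void main(String[] args) {
                  int storedLayoutVersion = -18;    // hypothetical values, illustration only
                  int softwareLayoutVersion = -26;

                  // Only runs when assertions are enabled (-ea):
                  assert storedLayoutVersion == softwareLayoutVersion : "layout versions differ";

                  // Always enforced, regardless of JVM options:
                  Preconditions.checkState(storedLayoutVersion == softwareLayoutVersion,
                      "stored layout version %s does not match software layout version %s",
                      storedLayoutVersion, softwareLayoutVersion);
                }
              }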

          Konstantin Shvachko added a comment -

          I think originally the tests were written under the assumption that the VERSION file is the same for all versions. It is not anymore, so I agree with you that we should have special methods for generating the VERSION file, somewhere in UpgradeUtilities.

          I am not against Guava or the vote, I just think it is strange to introduce a new package dependency just for one line, which to me looks like a classic assert.

          Todd Lipcon added a comment -

          OK, I will add a copy of the routine to save the VERSION file in the test tree.

          I just think it is strange to introduce a new package dependency just for one line

          The dependency was already introduced a few weeks back in common (so HDFS picks it up transitively). I'm just using it in this patch.

          Todd Lipcon added a comment -

          Hi Konstantin. I spent some time this morning trying to do this, but it would actually require a bit of refactoring to avoid introducing lots of duplicated code. The setFields method has lots of side effects beyond just setting up some properties. I think this refactoring is a good idea, but I'd prefer to separate it from this blocker so we can make progress towards getting the release out the door.

          Please note that this patch isn't the first case where we have a check against layoutVersion while writing the VERSION file – we already have such a check for federation-related fields.

          Todd Lipcon added a comment -

          Konst, would you mind +1ing this as is? If you want, we can file a JIRA for the overall improvement of removing the support for saving past-layout VERSION files from the main source tree. But, it looks moderately complex to do cleanly, and this patch as is fixes a blocker bug and doesn't make the issue any worse than it already is.

          Eli Collins added a comment -

          Todd, I agree that Konst's suggestion should be addressed in a separate jira, let's file one for that. Thanks for attempting it here.

          In NNStorage, would the imageDigest ever be null now that we first check the layout version? The comment might be clearer as something like "Only write this field if writing a newer version file that supports image digests". I didn't know where -26 came from.

          Wrt the precondition, in this scenario we typically throw an AssertionError, right? Those are thrown even when {{assert}}s are not enabled. If Preconditions are better, how about filing a separate jira to replace our use of AssertionError with Preconditions so we're consistent throughout the project?

          Core change and tests look great btw.

          Todd Lipcon added a comment -

          Updated patch based on Eli's comments. I decided to use IllegalStateException since it describes what's going on more clearly than AssertionError here.

          I also took advantage of Suresh's new improvement which adds nice enums for LayoutVersion checks, and updated the comments in that area of the code.
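          For context, a hedged sketch of the kind of check those enums enable (the specific feature constant and surrounding code are assumptions based on the -26 / image-checksum discussion earlier in this thread, not the actual patch):

              // Sketch only: with the LayoutVersion helper, layout-dependent logic reads
              // as a named feature check rather than a magic number like -26.
              if (LayoutVersion.supports(Feature.FSIMAGE_CHECKSUM, layoutVersion)) {
                // write the image MD5 field only for layouts that include it
              }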

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12481413/hdfs-1969.txt
          against trunk revision 1131223.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/703//testReport/
          Findbugs warnings: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/703//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/703//console

          This message is automatically generated.

          Eli Collins added a comment -

          +1, latest patch lgtm. Mind filing another jira for the setFields refactoring and using Preconditions?

          Eli Collins added a comment -

          Forgot to mention, 1 nit: "laytouVersions"

          Todd Lipcon added a comment -

          Thanks Eli. I filed HDFS-2037 for the setFields cleanup.

          Committed this one to 0.22 and trunk (fixed the spelling typo before commit)

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #714 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/714/)
          HDFS-1969. Running rollback on new-version namenode destroys the namespace. Contributed by Todd Lipcon.

          todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1132525
          Files :

          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/common/Storage.java
          • /hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/TestDFSRollback.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
          • /hadoop/hdfs/trunk/CHANGES.txt
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-22-branch #61 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-22-branch/61/)
          HDFS-1969. Running rollback on new-version namenode destroys the namespace. Contributed by Todd Lipcon.

          todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1132527
          Files :

          • /hadoop/hdfs/branches/branch-0.22/src/test/hdfs/org/apache/hadoop/hdfs/TestDFSRollback.java
          • /hadoop/hdfs/branches/branch-0.22/src/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
          • /hadoop/hdfs/branches/branch-0.22/CHANGES.txt
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #689 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/689/)
          HDFS-1969. Running rollback on new-version namenode destroys the namespace. Contributed by Todd Lipcon.

          todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1132525
          Files :

          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/common/Storage.java
          • /hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/TestDFSRollback.java
          • /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
          • /hadoop/hdfs/trunk/CHANGES.txt

            People

            • Assignee: Todd Lipcon
            • Reporter: Todd Lipcon
            • Votes: 0
            • Watchers: 2
