Hadoop HDFS: HDFS-718

Configuration parameter to prevent accidental formatting of HDFS filesystem

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.22.0
    • Fix Version/s: 0.22.0
    • Component/s: namenode
    • Labels:
      None
    • Environment:

      Any

    • Hadoop Flags:
      Reviewed

      Description

      Currently, any time the NameNode is not running, an HDFS filesystem will accept the 'format' command and will duly format itself. Those of us with multi-PB HDFS filesystems are really quite uncomfortable with this behavior. There is a "Y/N" confirmation in the format command, but if the formatter genuinely believes themselves to be doing the right thing, the filesystem will be formatted.

      This patch adds a configuration parameter to the namenode, dfs.namenode.support.allowformat, which defaults to "true", preserving the current behavior: formatting is always allowed when the NameNode is down, i.e. no other process is holding the namenode lock. If dfs.namenode.support.allowformat is set to "false", the NameNode will not allow itself to be formatted until the config parameter is changed back to "true".
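
      For illustration, a minimal sketch of the guard this describes (not the literal patch: the class name, method placement, and error message are assumptions; only the key name and default come from this description):

          import java.io.IOException;
          import org.apache.hadoop.conf.Configuration;

          class FormatGuardSketch {
            // Called from the NameNode's format path before anything is touched.
            static void checkAllowFormat(Configuration conf) throws IOException {
              boolean allowFormat = conf.getBoolean("dfs.namenode.support.allowformat", true);
              if (!allowFormat) {
                throw new IOException("dfs.namenode.support.allowformat is set to false for"
                    + " this filesystem, so it cannot be formatted. Set it to true to allow"
                    + " formatting.");
              }
              // Otherwise fall through to the usual Y/N confirmation and the actual format.
            }
          }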

      The general idea is that for production HDFS filesystems, the user would format the HDFS once, then set dfs.namenode.support.allowformat to "false" for all time.

      The attached patch was generated against trunk and gets a +1 on my test machine. We have a 0.20 version that we are using in our cluster as well.

      Attachments

      1. HDFS-718.patch.txt
        7 kB
        Andrew Ryan
      2. HDFS-718.patch-2.txt
        8 kB
        Andrew Ryan
      3. HDFS-718-3.patch
        11 kB
        Jakob Homan
      4. HDFS-718-4.patch
        9 kB
        Jakob Homan
      5. HDFS-718-5.patch
        9 kB
        Jakob Homan

        Activity

        Andrew Ryan added a comment -

        Patch for issue

        Andrew Ryan added a comment -

        Really attaching the patch!

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12422742/HDFS-718.patch.txt
        against trunk revision 826905.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 4 new or modified tests.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed core unit tests.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/44/testReport/
        Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/44/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/44/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/44/console

        This message is automatically generated.

        Andrew Ryan added a comment -

        The sole core test failure was org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend, which is completely unrelated to this patch (and according to Dhruba, an area of code in some flux right now). I think it is ready for review.

        dhruba borthakur added a comment -

        Hi Allen, Koji: can somebody please review this patch and comment on whether it is useful to the general Hadoop community?

        Rajiv Chittajallu added a comment -

        Prompting for confirmation before formatting the fs.name.dir is good enough. If the operator can confirm at the prompt, what will stop her from updating the configuration?
        How do you prevent an operator from doing rm -r <fs.name.dir>?

        A better solution for avoiding operator error is to take regular backups of the namespace.

        Andrew Ryan added a comment -

        The issue we're trying to address is one where the user thinks they are formatting the correct namenode, but actually are formatting the wrong namenode. In our environment we have a team of people administering multiple clusters, some of which may be important production ones which should never be formatted, and some of which are development or test clusters which might get formatted more frequently. And the admins may have multiple windows open on their screen at the same time showing different clusters, which can increase the chances of errors.

        This patch doesn't protect against 'rm -rf', the administrator changing the configuration, or other sorts of malice, but that isn't the intention of the patch. Nor is this patch meant to be a substitute for regular backups of the filesystem image. The intention is to provide an additional safeguard against administrative error. Backups do not prevent operator error; they merely allow recovery from it. Better to never lose your filesystem image than to have to try and recover it from a backup.

        dhruba borthakur added a comment -

        Hi Rajiv: no software can ensure that the administrator does not change the config...this is an operational procedure. But don't you agree that it is better if we can ensure that multiple administrative failures have to happen before the cluster can be reformatted? In this case, the "confirm prompt" together with the config value makes the protection slightly more secure, doesn't it?

        Allen Wittenauer added a comment -

        The HDFS implementation currently matches what mkfs and related tools do: if you specify the wrong slice, it is going to get formatted.

        I think the biggest problem here is really that the startup message is too verbose for an admin to notice they are on the wrong node. [If you are running multiple namenodes on the same node, well... you've got other issues.] I'd much rather see a patch that changes the startup message.

        Todd Lipcon added a comment -

        I agree with Allen and Rajiv here somewhat.

        How about a change to the confirmation message that said something like:

            *** You are about to format the namenode on foo-cluster.corp.facebook.com ***
            This filesystem currently contains 2934823 files and was last formatted on 9/10/09.
            To confirm deletion, please enter the number of files contained in this namenode:

        It's a little bit on the silly side, but would force the operator to be aware of exactly how much data they're deleting. Naturally we should provide a "-y" equivalent that skips the confirmation for use in scripts.

        (I suppose the number of files is actually impossible to determine without reading the namespace and replaying the edit log, but the same idea could be done with the hostname - basically anything to get the admin off of "autopilot")
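
        A rough sketch of that hostname-based variant (illustrative only, not part of any patch attached here; the InetAddress lookup and the exact wording are assumptions):

            import java.io.BufferedReader;
            import java.io.IOException;
            import java.io.InputStreamReader;
            import java.net.InetAddress;

            class FormatConfirmSketch {
              // Returns true only if the operator re-types this machine's hostname exactly.
              static boolean confirmFormat() throws IOException {
                String host = InetAddress.getLocalHost().getHostName();
                System.out.println("*** You are about to format the namenode on " + host + " ***");
                System.out.println("To confirm, please re-type the hostname of this namenode:");
                BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
                String answer = in.readLine();
                return answer != null && host.equals(answer.trim());
              }
            }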

        dhruba borthakur added a comment -

        @Allen, thanks for your comments. Please let us know how you would like to change the "startup" message and how it would help prevent accidental formats of the namenode.

        My thinking is that as far as "format" of HDFS is concerned, it is better to be safe than sorry. This patch leaves the default behaviour as it currently is, but it is a boon for a paranoid administrator because he/she can rest assured that "bin/hadoop namenode -format" will not work against a NN that is configured with this option. Of course, there are plenty of other ways to screw up the fsimage/edits files, but if we can prevent some of the commonly-occurring scenarios, that will be helpful.

        If the functionality of this patch already existed in HDFS, my belief is that most HDFS admins would already have set this config parameter to prevent accidental formatting of the namenode.

        Tsz Wo Nicholas Sze added a comment -

        If we are going to do this, the new configuration key should be listed in org.apache.hadoop.hdfs.DFSConfigKeys.
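
        For reference, listing the key in DFSConfigKeys might look like the sketch below, using the constant name and value quoted later in this thread; the DEFAULT constant is an assumption.

            public class DFSConfigKeysSketch {  // stands in for org.apache.hadoop.hdfs.DFSConfigKeys
              public static final String DFS_NAMENODE_SUPPORT_ALLOWFORMAT_KEY =
                  "dfs.namenode.support.allowformat";
              public static final boolean DFS_NAMENODE_SUPPORT_ALLOWFORMAT_DEFAULT = true;
            }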

        dhruba borthakur added a comment -

        Thanks Nicholas, Todd and Allen for the comments.

        Todd: The idea of the proposed configuration is to ensure that no script can format this namenode, however hard it may try. The main purpose is to not add the "-y" option. This is for the paranoid administrator who, for sure, never wants any script to format this namenode.

        Andrew Ryan added a comment -

        Updated patch, including change to DFSConfigKeys.java

        Andrew Ryan added a comment -

        I just attached an updated patch which adds the key to org.apache.hadoop.hdfs.DFSConfigKeys.

        @Todd: I'm fine with revising the confirmation message, but at format time the NameNode is down, so there's no easy/cheap way that either I or Dhruba know of to get this information about the filesystem. We're certainly open to suggestions, though.

        Allen Wittenauer added a comment -

        Why are you concerned about making this a "cheap" operation?

        Also, why does the name node need to be up to get this info? format runs on the machine where the data is stored, so you have access to the hostname and the fsimage, which means that the message Todd suggests can be calculated.

        Maybe I'm just missing something here. All I'm really seeing is that someone at Facebook wasn't paying attention and made a mistake. It happens. But chances are good that a script that formats the name node is also going to be putting down a config at the same time. Perhaps instead the format script you guys have should be checking the size of the image.

        I'm just not buying that this is a change that needs to be made, especially compared to other file system technologies.
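
        A rough sketch of the local computation described above: everything in the proposed message can be derived on the format host itself (hypothetical helper; it assumes the pre-federation layout where the image file lives at current/fsimage under the configured name directory).

            import java.io.File;
            import java.net.InetAddress;

            class LocalImageInfoSketch {
              // Builds a warning line from purely local information, given the
              // configured ${dfs.name.dir} path.
              static String formatWarning(File nameDir) throws Exception {
                String host = InetAddress.getLocalHost().getHostName();
                File fsimage = new File(new File(nameDir, "current"), "fsimage");
                return "*** About to format the namenode on " + host
                    + "; the existing fsimage is " + fsimage.length() + " bytes ***";
              }
            }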

        dhruba borthakur added a comment -

        It appears that we do not have consensus on this one. Cancelling the patch because it does not merge with current trunk, and letting Andrew decide whether he wants to pursue this further or not.

        Allen Wittenauer added a comment -

        FWIW, after attending the Hadoop contributor meeting last week, I can see now why FB would want the general 'do not format' behavior given the current lay of the land. Assuming AvatarNode comes into full use, would this still be desired/required?

        But I'm still convinced a better message would be sufficient (and is highly desirable in any case).

        Todd Lipcon added a comment -

        Whoops, I forgot to watch this one and it just came back across my radar.

        I don't see any harm in adding the config option suggested here - I was just wondering if there were another way of achieving the same goal without adding Even More Configs. So +0 from me.

        Jakob Homan added a comment -

        I'm +1. Once a Hadoop cluster is up and running in production, it can potentially hold very critical and valuable information. An extra, optional safeguard that saves one such cluster and doesn't add any serious complexity to the code is worth it. A steady-state cluster is a very valuable thing...

        dhruba borthakur added a comment -

        Now that this issue is resurrected, I still think that this is a very valuable option for the paranoid administrator.

        Jakob Homan added a comment -

        One nit on the patch: since the patch was correctly changed to use the DFS constants - DFS_NAMENODE_SUPPORT_ALLOWFORMAT_KEY = "dfs.namenode.support.allowformat"; - we should reference this key only through the constant and never via the raw string. Otherwise, +1 as it is.
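
        To illustrate the nit, a sketch of the lookup done both ways, assuming the constant quoted above:

            import org.apache.hadoop.conf.Configuration;
            import org.apache.hadoop.hdfs.DFSConfigKeys;

            class AllowFormatLookupSketch {
              static boolean allowFormat(Configuration conf) {
                // Avoid: conf.getBoolean("dfs.namenode.support.allowformat", true);
                // the raw string can silently drift out of sync with DFSConfigKeys.
                // Prefer: one definition of the key, referenced everywhere.
                return conf.getBoolean(DFSConfigKeys.DFS_NAMENODE_SUPPORT_ALLOWFORMAT_KEY, true);
              }
            }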

        Jakob Homan added a comment -

        Went to review the patch and found it no longer applied. Sync'ed with trunk and did some cleanup. Mainly: switched to using DFSConfigKeys, fixed some values in the MiniDFSCluster setup, cleaned up logging, and changed the config value so that "allowformat" is not run together. If a committer wants to +1 these changes, I feel this patch is ready to go, barring further objections.

        Jakob Homan added a comment -

        Submitting v3 of the patch to Hudson.

        Jakob Homan added a comment -

        Hudson is AWOL: test-commit passes fine, and so does test-patch:

        [exec] +1 overall.  
        [exec] 
        [exec]     +1 @author.  The patch does not contain any @author tags.
        [exec] 
        [exec]     +1 tests included.  The patch appears to include 3 new or modified tests.
        [exec] 
        [exec]     +1 javadoc.  The javadoc tool did not generate any warning messages.
        [exec] 
        [exec]     +1 javac.  The applied patch does not increase the total number of javac compiler warnings.
        [exec] 
        [exec]     +1 findbugs.  The patch does not introduce any new Findbugs warnings.
        [exec] 
        [exec]     +1 release audit.  The applied patch does not increase the total number of release audit warnings.
        [exec] 
        [exec]     +1 system tests framework.  The patch passed system tests framework compile.
        Boris Shkolnik added a comment -

        Looks good.
        One thing, though: I think we can move cluster.shutdown into the tearDown method to make sure the cluster always gets shut down.

        Otherwise, +1.
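
        An illustrative sketch of that suggestion (hypothetical test class, not the actual patch code): keep the cluster reference at field scope and shut it down in tearDown, so cleanup happens even when an assertion fails mid-test.

            import org.apache.hadoop.conf.Configuration;
            import org.apache.hadoop.hdfs.HdfsConfiguration;
            import org.apache.hadoop.hdfs.MiniDFSCluster;
            import org.junit.After;
            import org.junit.Test;

            public class TestAllowFormatSketch {          // hypothetical class name
              private final Configuration conf = new HdfsConfiguration();
              private MiniDFSCluster cluster;

              @After
              public void tearDown() {
                if (cluster != null) {
                  cluster.shutdown();                     // runs even if the test body throws
                  cluster = null;
                }
              }

              @Test
              public void testClusterStartsAndStops() throws Exception {
                cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
                cluster.waitActive();
                // ...exercise the namenode here; no explicit shutdown needed in the test body.
              }
            }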

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12454051/HDFS-718-3.patch
        against trunk revision 997157.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 3 new or modified tests.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed core unit tests.

        -1 contrib tests. The patch failed contrib unit tests.

        +1 system tests framework. The patch passed system tests framework compile.

        Test results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/238/testReport/
        Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/238/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/238/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/238/console

        This message is automatically generated.

        Jakob Homan added a comment -

        Here's an updated patch addressing Boris' comment and using the new MiniDFSCluster builder.

        Test-patch is fine (modulo the known-bad release audit warnings):

             [exec] -1 overall.  
             [exec] 
             [exec]     +1 @author.  The patch does not contain any @author tags.
             [exec] 
             [exec]     +1 tests included.  The patch appears to include 3 new or modified tests.
             [exec] 
             [exec]     +1 javadoc.  The javadoc tool did not generate any warning messages.
             [exec] 
             [exec]     +1 javac.  The applied patch does not increase the total number of javac compiler warnings.
             [exec] 
             [exec]     +1 findbugs.  The patch does not introduce any new Findbugs warnings.
             [exec] 
             [exec]     -1 release audit.  The applied patch generated 98 release audit warnings (more than the trunk's current 1 warnings).
             [exec] 
             [exec]     +1 system test framework.  The patch passed system test framework compile.

        I'd like to get this into 22. Since the change in this code only affected the unit test, which has been verified, I'm ready to go ahead and commit it. How's that sound?

        Eli Collins added a comment -

        The LOG calls and throws in the tests right after the fail statements will never execute. Otherwise the patch looks good to me.
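
        An illustrative sketch of the pattern being pointed out (hypothetical code, not the actual test): JUnit's fail() throws AssertionError, so statements placed right after it can never run.

            import static org.junit.Assert.fail;

            import java.io.IOException;
            import org.apache.commons.logging.Log;
            import org.apache.commons.logging.LogFactory;
            import org.junit.Test;

            public class DeadCodeAfterFailSketch {
              private static final Log LOG = LogFactory.getLog(DeadCodeAfterFailSketch.class);

              // Stand-in for the guarded format call in the real test.
              private void formatNameNode() throws IOException {
                throw new IOException("format refused: allowformat is false");
              }

              @Test
              public void testFormatIsRefused() throws IOException {
                try {
                  formatNameNode();                          // expected to be rejected
                  fail("format succeeded although allowformat is false");
                  LOG.info("never reached");                 // dead code, per the comment above
                  throw new IOException("never reached");    // dead code, per the comment above
                } catch (IOException expected) {
                  LOG.info("format was refused as expected");
                }
                // The fix in the next comment: drop such dead statements and the surrounding
                // try/catch, and let unexpected exceptions propagate out of the test method.
              }
            }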

        Jakob Homan added a comment -

        Great catch, Eli. I've removed the try-catches around those statements; exceptions will now go straight up. I think this is good to go.

        Jakob Homan added a comment -

        I've committed this. Resolving as fixed. Thanks, Ryan!


          People

          • Assignee:
            Andrew Ryan
            Reporter:
            Andrew Ryan
          • Votes:
            0
            Watchers:
            7
