Hadoop HDFS / HDFS-12420

Disable Namenode format for prod clusters when data already exists

    Details

    • Type: Improvement
    • Status: Patch Available
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels:
      None
    • Hadoop Flags:
      Incompatible change

      Description

Disable NameNode format to avoid accidental formatting of the NameNode in production clusters. If someone really wants to delete the complete fsImage, they can first delete the metadata dir and then run

       hdfs namenode -format

      manually.
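The manual sequence described above might look like the sketch below. The path is simulated with a temp directory standing in for whatever dfs.namenode.name.dir points to on a real NameNode host, and the actual format command is left as a comment since it requires a live installation:

```shell
# Simulate a NameNode metadata directory (placeholder layout; the real
# location is whatever dfs.namenode.name.dir points to in hdfs-site.xml).
NAME_DIR=$(mktemp -d)/namenode
mkdir -p "$NAME_DIR/current"
echo "namespaceID=12345" > "$NAME_DIR/current/VERSION"

# Step 1: explicitly delete the existing metadata directory first.
rm -rf "$NAME_DIR"

# Step 2: only then re-create it by formatting (run on the NameNode host):
# hdfs namenode -format
```

The point of the two-step dance is that the destructive action (deleting metadata) becomes a deliberate, separate command rather than a side effect of format.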

      1. HDFS-12420.01.patch
        2 kB
        Ajay Kumar
      2. HDFS-12420.02.patch
        7 kB
        Ajay Kumar
      3. HDFS-12420.03.patch
        12 kB
        Ajay Kumar
      4. HDFS-12420.04.patch
        11 kB
        Ajay Kumar
      5. HDFS-12420.05.patch
        11 kB
        Ajay Kumar
      6. HDFS-12420.06.patch
        9 kB
        Ajay Kumar
      7. HDFS-12420.07.patch
        11 kB
        Ajay Kumar

        Activity

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 16s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
              trunk Compile Tests
        +1 mvninstall 15m 23s trunk passed
        +1 compile 0m 58s trunk passed
        +1 checkstyle 0m 37s trunk passed
        +1 mvnsite 0m 56s trunk passed
        +1 findbugs 1m 40s trunk passed
        +1 javadoc 0m 41s trunk passed
              Patch Compile Tests
        +1 mvninstall 0m 50s the patch passed
        +1 compile 0m 45s the patch passed
        +1 javac 0m 45s the patch passed
        +1 checkstyle 0m 34s the patch passed
        +1 mvnsite 0m 51s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 44s the patch passed
        +1 javadoc 0m 36s the patch passed
              Other Tests
        -1 unit 99m 47s hadoop-hdfs in the patch failed.
        +1 asflicense 0m 19s The patch does not generate ASF License warnings.
        127m 19s



        Reason Tests
        Failed junit tests hadoop.hdfs.server.namenode.TestSaveNamespace
          hadoop.hdfs.qjournal.TestNNWithQJM
          hadoop.hdfs.TestFileAppendRestart
          hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
          hadoop.hdfs.TestEncryptedTransfer
          hadoop.hdfs.server.namenode.TestNameEditsConfigs
          hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
          hadoop.hdfs.TestLeaseRecoveryStriped
          hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
          hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations
          hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
          hadoop.hdfs.server.namenode.TestGenericJournalConf
          hadoop.hdfs.TestAclsEndToEnd
          hadoop.hdfs.TestDistributedFileSystem
          hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120
          hadoop.hdfs.server.namenode.TestClusterId
          hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010
          hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
          hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
          hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
          hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
          hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
        Timed out junit tests org.apache.hadoop.hdfs.TestLeaseRecovery2
          org.apache.hadoop.hdfs.TestWriteReadStripedFile
          org.apache.hadoop.hdfs.server.namenode.TestEditLogRace



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:71bbb86
        JIRA Issue HDFS-12420
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12886537/HDFS-12420.01.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 6747d6126762 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 661f5eb
        Default Java 1.8.0_144
        findbugs v3.1.0-RC1
        unit https://builds.apache.org/job/PreCommit-HDFS-Build/21083/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
        Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/21083/testReport/
        modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
        Console output https://builds.apache.org/job/PreCommit-HDFS-Build/21083/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        shahrs87 Rushabh S Shah added a comment -

Ajay Kumar Can you please write a test case for the new behavior?

        arpitagarwal Arpit Agarwal added a comment - - edited

        Thanks for this improvement Ajay Kumar. Couple of comments, in addition to the test case as suggested by Rushabh:

        1. Let's also make the -force option a no-op. We can continue to accept it but it should have no effect and we should print a warning saying that the force option is being ignored.
        2. Same with the -nonInteractive option.
        3. Also let's update the site documentation with the new behavior:
          https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#namenode
  The site docs are in HDFSCommands.md
        vagarychen Chen Liang added a comment -

        Thanks for taking care of this Ajay Kumar!

        Some of the failed tests seem related. For example, TestSaveNamespace#testTxIdPersistence appears to fail because it tries to format the NameNode while the previous test has leftover data in the test dir, so the format is aborted. Consequently, the subsequent txid assertion in this test also fails. We will need to delete the test directory contents for certain tests.

        ajayydv Ajay Kumar added a comment -

        Rushabh Shah,Arpit Agarwal,Chen Liang thanks for review. Attaching new patch with suggested changes.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 17s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
              trunk Compile Tests
        +1 mvninstall 14m 19s trunk passed
        +1 compile 0m 49s trunk passed
        +1 checkstyle 0m 38s trunk passed
        +1 mvnsite 0m 53s trunk passed
        +1 findbugs 1m 41s trunk passed
        +1 javadoc 0m 41s trunk passed
              Patch Compile Tests
        +1 mvninstall 1m 0s the patch passed
        +1 compile 1m 0s the patch passed
        +1 javac 1m 0s the patch passed
        -0 checkstyle 0m 40s hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 184 unchanged - 1 fixed = 189 total (was 185)
        +1 mvnsite 1m 3s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 2m 2s the patch passed
        +1 javadoc 0m 38s the patch passed
              Other Tests
        -1 unit 95m 35s hadoop-hdfs in the patch failed.
        +1 asflicense 0m 18s The patch does not generate ASF License warnings.
        122m 53s



        Reason Tests
        Failed junit tests hadoop.hdfs.server.namenode.TestClusterId
          hadoop.hdfs.TestClientProtocolForPipelineRecovery
          hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations
          hadoop.hdfs.server.namenode.TestGenericJournalConf
          hadoop.hdfs.qjournal.TestNNWithQJM
          hadoop.hdfs.server.namenode.TestAllowFormat
          hadoop.hdfs.server.namenode.TestReencryptionWithKMS
          hadoop.hdfs.TestLeaseRecoveryStriped
          hadoop.hdfs.server.namenode.TestNameEditsConfigs
          hadoop.hdfs.server.datanode.TestDirectoryScanner
          hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
          hadoop.hdfs.TestDistributedFileSystem
          hadoop.hdfs.TestReplaceDatanodeOnFailure
          hadoop.hdfs.TestReconstructStripedFile
          hadoop.hdfs.TestLease
        Timed out junit tests org.apache.hadoop.hdfs.TestWriteReadStripedFile



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:71bbb86
        JIRA Issue HDFS-12420
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12886764/HDFS-12420.02.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux d719cb91ece5 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / f4b6267
        Default Java 1.8.0_144
        findbugs v3.1.0-RC1
        checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/21108/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
        unit https://builds.apache.org/job/PreCommit-HDFS-Build/21108/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
        Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/21108/testReport/
        modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
        Console output https://builds.apache.org/job/PreCommit-HDFS-Build/21108/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        anu Anu Engineer added a comment -

        Allen Wittenauer This does break backward compatibility, so I wanted to hear your thoughts on it.
        The reason we are doing this is that people are capable of formatting clusters with data on them. Just wondering how big of an issue this would be if we put it in 3.0. Appreciate any comments you might have.

        aw Allen Wittenauer added a comment -

        Don't we already have the y/n check when data exists? Why do we need another?

        aw Allen Wittenauer added a comment -

        The more I think about this, the more I'm -1:

        Completely breaks automation. Automation MUST work.

        Let's also make the -force option a no-op. We can continue to accept it but it should have no effect and we should print a warning saying that the force option is being ignored.

        This just makes it worse. HDFS-5138 was a disaster for automation when -finalize was made a no-op. See HDFS-8241 for the follow-up to clean it up.

        anu Anu Engineer added a comment -

        Don't we already have the y/n check when data exists? Why do we need another?

        We do, but a cluster owner, who was visibly distressed, pointed out that the prompt is not very clear amid lots of other text on the screen.

        We are just trying to avoid losing data through operator mistake. I thought you might have a concern about automation, which is why I flagged it for your consideration. Let me try to understand that a bit more: do you think people automate formatting of clusters? If they do, then preventing accidental data loss is all the more important.

        With an HDFS user hat on, I think this is a good improvement to have. I would expect HDFS to refuse to format a cluster with data. But with a sysadmin/developer hat on, I do like the fact that I can format a cluster with data; I do that when I test and develop.

        So in my mind, the question boils down to easier dev/ops cycles vs. user safety. The reason why this is filed for 3.0 is that it might be our last opportunity to make this change.

        Completely breaks automation. Automation MUST work.

        I see that you are voting with the devops hat on, and I do not disagree. But this is a place where breaking the automation might avoid a disaster for some poor user. One more data point, this JIRA is based on real feedback from a real large cluster. I am not apologizing for sloppy operation but trying to understand what we can do to prevent a user from making such a mistake.

        I am presuming (please correct me if I am wrong) that you are not objecting to the change or the intent per se, but more to the fact that we are outright refusing to format a cluster with NameNode metadata. Do you think adding a flag which says -DothisIamReallySmart would address the automation concern?

        vinayrpet Vinayakumar B added a comment -

        Don't we already have the y/n check when data exists? Why do we need another?

        Yes. We do have the prompt, which is the exact line being removed in the patch: fsImage.confirmFormat(force, isInteractive).
        The user can format over existing data by passing the -force flag or answering 'y' at the prompt.

        I too wanted to understand the real need for this complete disable of format.

        If someone really wants to delete the complete fsImage, they can first delete the metadata dir

        How can you delete the shared edits dir on journal nodes manually?

        I think the current behavior of format works fine. The -force option should simply not be used too lightly.
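The decision the existing confirmFormat check makes can be sketched as a small function; this is a simplified model of the behavior described above (format proceeds on a fresh dir, with -force, or on an explicit 'y'; otherwise it aborts), not the actual Java implementation:

```shell
# Simplified sketch (assumed logic) of today's format confirmation:
# args: data_exists(yes/no) force(yes/no) interactive(yes/no) answer(y/n/-)
confirm_format() {
  local data_exists=$1 force=$2 interactive=$3 answer=$4
  [ "$data_exists" = "no" ] && { echo format; return; }     # nothing to lose
  [ "$force" = "yes" ] && { echo format; return; }          # -force skips prompt
  [ "$interactive" = "yes" ] && [ "$answer" = "y" ] && { echo format; return; }
  echo abort                                                # e.g. -nonInteractive with data
}

confirm_format no  no  yes n   # → format (no existing data)
confirm_format yes yes no  -   # → format (-force)
confirm_format yes no  yes y   # → format (user answered y)
confirm_format yes no  no  -   # → abort (data exists, no prompt allowed)
```

The patch under discussion effectively replaces the last three cases with an unconditional abort whenever data exists.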

        jnp Jitendra Nath Pandey added a comment -

        In spite of the -force option or the prompt for Y/N, admins do make mistakes and end up losing data. In a real production cluster with real data, why would someone want to do a format? In dev/qa clusters, I can see the need for format. Another option is to configure the cluster as "production" mode, where format will not be allowed. Dev/test clusters can be configured with 'dev' mode, where format is allowed.
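Such a mode could be driven by a configuration key; the property name below is purely hypothetical (it does not exist in hdfs-default.xml) and is only a sketch of what the proposal might look like:

```xml
<!-- Hypothetical property; not an existing Hadoop configuration key. -->
<property>
  <name>dfs.namenode.format.cluster.mode</name>
  <value>dev</value>
  <description>
    When set to "production", hdfs namenode -format refuses to run if
    NameNode metadata already exists. The default "dev" keeps the
    current prompt/-force behavior.
  </description>
</property>
```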

        vinayrpet Vinayakumar B added a comment -

        In spite of the -force option or the prompt for Y/N, admins do make mistakes and end up losing data. In a real production cluster with real data, why would someone want to do a format? In dev/qa clusters, I can see the need for format.

        Yes, I agree, admins can make mistakes. In a real cluster the 'format' command (especially with -force) should be used with the utmost attention (same as 'rm -r' in Linux).

        Another option is to configure the cluster as "production" mode, where format will not be allowed. Dev/test clusters can be configured with 'dev' mode, where format is allowed.

        Still, if you insist on disallowing format in 'production' clusters, this option looks good, provided the default value is set to 'dev' mode to keep the current 'prompt' behavior as is.

        arpitagarwal Arpit Agarwal added a comment - - edited

        Allen, thanks for bringing up the automation concern. We certainly don't want to break any deployment scripts. This patch will not break scripted deployment of new clusters since it eliminates the prompt completely.

        Formatting clusters with pre-existing data was a bad idea in the first place. It deletes the NameNode metadata and leaves the cluster in an unusable state since DataNodes cannot connect anymore. I don't think any existing automation can depend on this behavior since it is functionally broken.

        That said, if you have examples of automated deployments that will be broken by this change and that we haven't thought of, we can abandon the idea.

        aw Allen Wittenauer added a comment -

        cluster owner, who was visibly distressed.

        Well sure. They screwed up. They can either own up to the fact they made a mistake and learn from it or try to push blame off onto someone or something else, like their vendor. Besides, who doesn't make a copy of the fsimage data on a regular basis? That's Hadoop Ops 101.

        That said: there comes a point where it becomes impossible to protect every admin from every mistake they may possibly make.

        -format is the functional equivalent of newfs. The argument here is the same as "newfs should fail if it detects a partition table. You'll need to dd onto the raw disk to wipe it out first". If you ask any experienced admin, 9/10 they're going to tell you that makes zero sense.

        The same thing here. The code specifically warns the user that they are about to delete live data. Could the messaging be improved? Sure and that's probably what should be happening if users are confused enough to file this drastic overreaction. But the warning is there all the same. It is up to the user to act upon that information and determine it is safe or not to continue with the operation. If they blindly -force it, well, that's on them. Users might remove data they need by always doing -skipTrash. So we should remove it, right? Of course not.

        One of the key principles of operations is that admins have enough rope to hang themselves. This is exactly the same case. In this instance, the admin did exactly that: hung themselves because they weren't careful.

        How you can delete the shared edits dir in journal nodes manually?

        I'm really glad you asked that question because it's a key one. It's sort of ridiculous to have admins go hunt down where Hadoop might be stuffing metadata. Add in the complexity of HA and it is even more ludicrous.

        That said, if you have examples of automated deployments that will be broken by this change and that we haven't thought of, we can abandon the idea.

        I have clients that do this on a regular basis. They regularly roll out small, short term clusters to external groups. Yes, this change will break them horribly.

        ajayydv Ajay Kumar added a comment -

        Hi Allen Wittenauer, what you said is true, but as Arpit Agarwal has pointed out, the current format functionality is itself broken: it deletes the metadata while doing nothing about the data stored on the DataNodes.
        We can keep the existing functionality as it is and add a new property to identify a prod cluster. By default this property will be set to non-prod. If someone marks their cluster as a prod cluster, then this can be an additional safeguard. This maintains backward compatibility and hopefully addresses your concerns as well.
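
        To make the proposal concrete, here is a minimal sketch of such a guard in plain Java (the property name, the Map-backed stand-in for Hadoop's Configuration, and the canFormat helper are all illustrative assumptions, not code from the patch):

```java
import java.util.HashMap;
import java.util.Map;

public class FormatGuard {
    // Hypothetical property name; the key actually used by the patch differs.
    static final String IS_PROD_KEY = "dfs.cluster.is.prod";
    static final boolean IS_PROD_DEFAULT = false; // non-prod by default

    // Map-backed stand-in for Hadoop's Configuration, for illustration only.
    private final Map<String, String> conf = new HashMap<>();

    void set(String key, String value) {
        conf.put(key, value);
    }

    boolean getBoolean(String key, boolean defaultValue) {
        String value = conf.get(key);
        return value == null ? defaultValue : Boolean.parseBoolean(value.trim());
    }

    // Format is refused only when the cluster is marked prod; otherwise the
    // existing behavior (format allowed, subject to the usual prompt) continues.
    boolean canFormat() {
        return !getBoolean(IS_PROD_KEY, IS_PROD_DEFAULT);
    }
}
```

        With the default left alone, canFormat() returns true and nothing changes for existing deployments; only clusters explicitly marked prod gain the extra safeguard.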

        anu Anu Engineer added a comment -

        Allen Wittenauer Thanks for your comments.

        The argument here is the same as "newfs should fail if it detects a partition table. You'll need to dd onto the raw disk to wipe it out first". If you ask any experienced admin, 9/10 they're going to tell you that makes zero sense.

        Makes sense; let us not proceed down this path. The only difference is that in the case of Hadoop, the damage a command can do is multiplied by the number of DataNodes.

        Having seen that accidental formats can happen, maybe being able to tag a cluster as "production", as discussed above, is a better idea?

        aw Allen Wittenauer added a comment -

        current format functionality is broken itself. It deletes the metadata while doing nothing about the data stored in data-nodes.

        Just like mkfs. And just like it, the fact that it doesn't delete the actual data is a feature, not a bug. If I restore the fsimage, my data should come back too (mostly... new data, of course, is likely to be missing). It's why making a copy of the fsimage is Hadoop Ops 101.

        Some key advice I give to admins: you can try to prevent mistakes, but they'll still happen despite your best efforts. Once the low-hanging warnings are in place, the energy is better spent on how to recover quickly. But that's a problem that lies outside the core code.

        For the record, yes, I've made HUGE mistakes like this in my career. Every admin has. In my case, I brought down an entire hospital once. Even with that experience, I still think requiring metadata deletion outside of the tool set is waaaaay overkill.

        may be being able to tag a cluster as "production" like discussed above is a better idea?

        Yeah, sure, whatever. All that's going to happen is:

        hdfs --config /tmp/mymodifiedconfig namenode -format -force
        

        If a user is too lazy/impatient/distracted to check that they are on a live system before hitting y, they'll just change the flag and then format. But if that makes folks happy, fine. It still sounds like the console output needs some work though if a user couldn't "see" it. (Not sure I agree with that either, but whatever.)

        BTW, a quick search for how the equivalent problem is solved in databases is interesting. Almost all of them that I looked at: don't give the user access. So yes, enough rope to hang themselves seems to be the expectation operationally.

        ajayydv Ajay Kumar added a comment -

        Jitendra Nath Pandey, Vinayakumar B, thanks for the suggestion about prod/non-prod. Allen Wittenauer, Anu Engineer, Arpit Agarwal, thanks for the valuable feedback. I have updated the patch to include a new property that identifies whether the cluster is marked as prod. By default the property value is false, and the existing functionality will continue.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 18s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
              trunk Compile Tests
        +1 mvninstall 14m 9s trunk passed
        +1 compile 0m 46s trunk passed
        +1 checkstyle 0m 41s trunk passed
        +1 mvnsite 0m 52s trunk passed
        +1 findbugs 1m 36s trunk passed
        +1 javadoc 0m 39s trunk passed
              Patch Compile Tests
        +1 mvninstall 0m 48s the patch passed
        +1 compile 0m 43s the patch passed
        +1 javac 0m 43s the patch passed
        -0 checkstyle 0m 36s hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 616 unchanged - 1 fixed = 622 total (was 617)
        +1 mvnsite 0m 51s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 xml 0m 1s The patch has no ill-formed XML file.
        +1 findbugs 1m 44s the patch passed
        +1 javadoc 0m 38s the patch passed
              Other Tests
        -1 unit 88m 10s hadoop-hdfs in the patch failed.
        +1 asflicense 0m 14s The patch does not generate ASF License warnings.
        114m 0s



        Reason Tests
        Failed junit tests hadoop.hdfs.server.namenode.TestGenericJournalConf
          hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
          hadoop.hdfs.qjournal.server.TestJournalNodeSync
          hadoop.hdfs.qjournal.TestNNWithQJM
          hadoop.hdfs.server.namenode.TestReencryptionWithKMS
          hadoop.hdfs.TestRollingUpgrade
          hadoop.hdfs.TestRollingUpgradeRollback
          hadoop.hdfs.TestDFSInotifyEventInputStream
          hadoop.hdfs.TestLeaseRecoveryStriped
          hadoop.hdfs.qjournal.TestSecureNNWithQJM
          hadoop.hdfs.server.namenode.TestDecommissioningStatus
          hadoop.hdfs.server.namenode.ha.TestStandbyInProgressTail
          hadoop.hdfs.TestCrcCorruption
          hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM
          hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
          hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
          hadoop.hdfs.tools.TestDFSAdminWithHA
          hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
          hadoop.hdfs.web.TestWebHdfsTimeouts
          hadoop.hdfs.TestWriteReadStripedFile



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:71bbb86
        JIRA Issue HDFS-12420
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12886992/HDFS-12420.03.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
        uname Linux 7a5daedd93e7 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 73aed34
        Default Java 1.8.0_144
        findbugs v3.1.0-RC1
        checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/21129/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
        unit https://builds.apache.org/job/PreCommit-HDFS-Build/21129/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
        Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/21129/testReport/
        modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
        Console output https://builds.apache.org/job/PreCommit-HDFS-Build/21129/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        ajayydv Ajay Kumar added a comment -

        Fixed checkstyle and unit test errors.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 41s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
              trunk Compile Tests
        +1 mvninstall 15m 49s trunk passed
        +1 compile 1m 0s trunk passed
        +1 checkstyle 0m 46s trunk passed
        +1 mvnsite 1m 8s trunk passed
        +1 findbugs 1m 55s trunk passed
        +1 javadoc 0m 43s trunk passed
              Patch Compile Tests
        +1 mvninstall 1m 1s the patch passed
        +1 compile 0m 57s the patch passed
        +1 javac 0m 57s the patch passed
        +1 checkstyle 0m 42s the patch passed
        +1 mvnsite 1m 2s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 xml 0m 1s The patch has no ill-formed XML file.
        +1 findbugs 2m 4s the patch passed
        +1 javadoc 0m 41s the patch passed
              Other Tests
        -1 unit 128m 14s hadoop-hdfs in the patch failed.
        +1 asflicense 0m 20s The patch does not generate ASF License warnings.
        158m 35s



        Reason Tests
        Failed junit tests hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
          hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
          hadoop.hdfs.web.TestWebHDFSXAttr
          hadoop.hdfs.TestLeaseRecoveryStriped
          hadoop.hdfs.server.balancer.TestBalancer
          hadoop.hdfs.TestReconstructStripedFile
          hadoop.hdfs.web.TestWebHDFSAcl
          hadoop.hdfs.web.TestHttpsFileSystem
          hadoop.hdfs.TestReadStripedFileWithMissingBlocks
          hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup
          hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
          hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
          hadoop.hdfs.server.namenode.TestDecommissioningStatus
          hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
        Timed out junit tests org.apache.hadoop.hdfs.TestWriteReadStripedFile



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:71bbb86
        JIRA Issue HDFS-12420
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12887028/HDFS-12420.04.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
        uname Linux 4543e7d6009c 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / e0b3c64
        Default Java 1.8.0_144
        findbugs v3.1.0-RC1
        unit https://builds.apache.org/job/PreCommit-HDFS-Build/21133/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
        Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/21133/testReport/
        modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
        Console output https://builds.apache.org/job/PreCommit-HDFS-Build/21133/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        ajayydv Ajay Kumar added a comment -

        The failed tests seem unrelated. The two tests below fail irrespective of the patch:
        TestNameNodeMetrics
        TestLeaseRecoveryStriped

        All other tests passed when I ran them locally.

        arpitagarwal Arpit Agarwal added a comment -

        Thanks Ajay Kumar. I liked the idea of disallowing format completely but this approach should address Vinay and Allen's concerns.

        A few comments on the patch:

        1. The following can be replaced with conf.getBoolean:
          conf.get(DFSConfigKeys.DFS_CLUSTER_IS_PROD, DFSConfigKeys
                    .DFS_CLUSTER_IS_PROD_DEFAULT).equalsIgnoreCase("true")
          
        2. We should set force and isInteractive to false if it's a prod cluster, to remove any possibility of deleting data. Also you can log a warning saying ... prod cluster. Ignoring the --force and --nonInteractive flags.
        3. Move the new property to core-site/CommonConfigurationKeysPublic. Also you could consider renaming it to something like hadoop.is.prod.cluster.
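
        Arpit's first point — the conf.getBoolean substitution — is behavior-preserving for the common values; a sketch of the two forms side by side (stand-alone Java, with the Configuration lookup reduced to a plain String argument for illustration):

```java
public class GetBooleanSketch {
    // Current form in the patch: fetch as a String and compare against "true".
    static boolean viaStringCompare(String value, String defaultValue) {
        String v = (value == null) ? defaultValue : value;
        return v.equalsIgnoreCase("true");
    }

    // conf.getBoolean-style form: fall back to a boolean default when unset.
    static boolean viaBoolean(String value, boolean defaultValue) {
        return (value == null) ? defaultValue : Boolean.parseBoolean(value.trim());
    }
}
```

        Besides being shorter at the call site, the getBoolean form takes a boolean default rather than a String one, so the default can't silently drift to a non-boolean value.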
        anu Anu Engineer added a comment -

        +1 from me. I am not going to commit this until Monday (09/18/2017) so that all the other people in this thread get a chance to comment.

        ajayydv Ajay Kumar added a comment -

        Arpit Agarwal, Anu Engineer, thanks for the review; attaching a new patch addressing the review suggestions.

        shahrs87 Rushabh S Shah added a comment -

        A couple of minor comments from me.
        1.

        + public static final String HADOOP_CLUSTER_IS_PROD = "hadoop.is.prod.cluster";

        This is just a namenode side config. We should prefix it as dfs.namenode.is.prod.cluster.
        The equivalent member variable name should be DFS_NAMENODE_IS_PROD_CLUSTER.

        2.
        The config key should be defined in hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java and not CommonConfigurationKeys.java.
        Also the default value should be defined in hdfs-default.xml instead of core-default.xml

        Other than that the patch looks good to me.

        daryn Daryn Sharp added a comment -

        I'll preface this with: I think this is inane. This is like Linux having a sysctl controlling whether you can use rm -r.

        That said, if we really are going to make this change:

        • Clarify in the interactive message that all data will be destroyed. I see the patch does that and I think it should be sufficient. But...
        • If there's a conf, "hadoop.is.prod.cluster" is silly. Call it something more like "hdfs.reformat.enabled".
        • Instead of a conf, maybe consider the presence of some file in the metadata directory as the protection.
        • I'd really rather allow -force work otherwise it's not really a force. If you run a "format" command with a "-force" then you made a huge conscious decision to wipe your NN. You can't typo that.
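
        Daryn's third bullet — a file in the metadata directory acting as the protection — could look roughly like this (the marker filename and helper methods are assumptions; his comment doesn't specify either):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReformatProtection {
    // Hypothetical marker filename; not specified in the comment.
    static final String MARKER = "do_not_format";

    // Reformatting is blocked for as long as the marker file exists in the
    // NameNode metadata directory; deleting it is the explicit opt-in.
    static boolean reformatAllowed(Path metadataDir) {
        return !Files.exists(metadataDir.resolve(MARKER));
    }

    static void protect(Path metadataDir) throws IOException {
        Files.createFile(metadataDir.resolve(MARKER));
    }
}
```

        An appeal of this over a conf key is that the protection lives next to the data it guards, so pointing the command at a different config directory (as in the hdfs --config example above) would not bypass it.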
        kihwal Kihwal Lee added a comment -

        Adding an extra layer of protection from mistakes is fine, but we can't assume that prod implies no reformat ever. Don't force people to reclassify their cluster for a command.

        If there's a conf, "hadoop.is.prod.cluster" is silly. Call it something more like "hdfs.reformat.enabled".

        Something like this will be better. Also, the default behavior should be backward compatible.

        ajayydv Ajay Kumar added a comment - edited

        Rushabh S Shah, Daryn Sharp, Kihwal Lee, thanks for the review and feedback. I have changed the property to "hdfs.reformat.enabled"
        and moved it to DFSConfigKeys. By default the property is set to true, which ensures the existing behavior continues unless someone explicitly disables reformat.
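
        The revised behavior is opt-out rather than opt-in; in the same sketch style as before (the property name is the one quoted above, but the surrounding code is illustrative, not the patch itself):

```java
import java.util.HashMap;
import java.util.Map;

public class ReformatFlag {
    static final String REFORMAT_ENABLED_KEY = "hdfs.reformat.enabled";
    static final boolean REFORMAT_ENABLED_DEFAULT = true; // backward compatible

    // Map-backed stand-in for Hadoop's Configuration, for illustration only.
    private final Map<String, String> conf = new HashMap<>();

    void set(String key, String value) {
        conf.put(key, value);
    }

    // Unset or "true": the existing format behavior continues unchanged.
    // Explicitly "false": formatting an existing namespace is refused.
    boolean reformatEnabled() {
        String v = conf.get(REFORMAT_ENABLED_KEY);
        return v == null ? REFORMAT_ENABLED_DEFAULT : Boolean.parseBoolean(v.trim());
    }
}
```

        This addresses Kihwal's backward-compatibility point: clusters that never touch the property behave exactly as before.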

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 19s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
              trunk Compile Tests
        +1 mvninstall 12m 45s trunk passed
        +1 compile 0m 44s trunk passed
        +1 checkstyle 0m 34s trunk passed
        +1 mvnsite 0m 47s trunk passed
        +1 findbugs 1m 36s trunk passed
        +1 javadoc 0m 39s trunk passed
              Patch Compile Tests
        -1 mvninstall 0m 26s hadoop-hdfs in the patch failed.
        -1 compile 0m 26s hadoop-hdfs in the patch failed.
        -1 javac 0m 26s hadoop-hdfs in the patch failed.
        -0 checkstyle 0m 37s hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 525 unchanged - 0 fixed = 527 total (was 525)
        -1 mvnsite 0m 26s hadoop-hdfs in the patch failed.
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 xml 0m 1s The patch has no ill-formed XML file.
        -1 findbugs 0m 25s hadoop-hdfs in the patch failed.
        +1 javadoc 0m 36s the patch passed
              Other Tests
        -1 unit 0m 25s hadoop-hdfs in the patch failed.
        +1 asflicense 0m 12s The patch does not generate ASF License warnings.
        22m 10s



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:71bbb86
        JIRA Issue HDFS-12420
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12887405/HDFS-12420.06.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
        uname Linux ffe77f5f7478 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / de197fc
        Default Java 1.8.0_144
        findbugs v3.1.0-RC1
        mvninstall https://builds.apache.org/job/PreCommit-HDFS-Build/21169/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
        compile https://builds.apache.org/job/PreCommit-HDFS-Build/21169/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
        javac https://builds.apache.org/job/PreCommit-HDFS-Build/21169/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
        checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/21169/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
        mvnsite https://builds.apache.org/job/PreCommit-HDFS-Build/21169/artifact/patchprocess/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
        findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/21169/artifact/patchprocess/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
        unit https://builds.apache.org/job/PreCommit-HDFS-Build/21169/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
        Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/21169/testReport/
        modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
        Console output https://builds.apache.org/job/PreCommit-HDFS-Build/21169/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        ajayydv Ajay Kumar added a comment - edited

        Fixed Jenkins issues.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 32s Docker mode activated.
              Prechecks
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
              trunk Compile Tests
        +1 mvninstall 15m 30s trunk passed
        +1 compile 0m 58s trunk passed
        +1 checkstyle 0m 48s trunk passed
        +1 mvnsite 0m 58s trunk passed
        +1 findbugs 1m 44s trunk passed
        +1 javadoc 0m 43s trunk passed
              Patch Compile Tests
        +1 mvninstall 0m 54s the patch passed
        +1 compile 0m 49s the patch passed
        +1 javac 0m 49s the patch passed
        +1 checkstyle 0m 40s the patch passed
        +1 mvnsite 0m 58s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 xml 0m 1s The patch has no ill-formed XML file.
        +1 findbugs 2m 2s the patch passed
        +1 javadoc 0m 40s the patch passed
              Other Tests
        -1 unit 97m 47s hadoop-hdfs in the patch failed.
        +1 asflicense 0m 20s The patch does not generate ASF License warnings.
        127m 8s



        Reason Tests
        Failed junit tests hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
          hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
          hadoop.hdfs.server.namenode.TestReencryptionWithKMS
          hadoop.hdfs.server.namenode.TestNamenodeRetryCache
          hadoop.hdfs.TestLeaseRecoveryStriped
          hadoop.hdfs.server.namenode.TestUpgradeDomainBlockPlacementPolicy
          hadoop.hdfs.server.datanode.TestDirectoryScanner
          hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
          hadoop.hdfs.TestReconstructStripedFile
          hadoop.hdfs.TestLease
        Timed out junit tests org.apache.hadoop.hdfs.TestWriteReadStripedFile



        Subsystem Report/Notes
        Docker Image: yetus/hadoop:71bbb86
        JIRA Issue HDFS-12420
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12887422/HDFS-12420.07.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
        uname Linux 0d5f3a0f8e6f 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / b9b607d
        Default Java 1.8.0_144
        findbugs v3.1.0-RC1
        unit https://builds.apache.org/job/PreCommit-HDFS-Build/21172/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
        Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/21172/testReport/
        modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
        Console output https://builds.apache.org/job/PreCommit-HDFS-Build/21172/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        ajayydv Ajay Kumar added a comment -

        The following test cases fail irrespective of the patch; the rest pass locally:
        TestNamenodeRetryCache
        TestRetryCacheWithHA
        TestNameNodeMetrics
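        To confirm these suites fail without the patch, they can be rerun locally against trunk. A sketch, assuming a trunk checkout with Maven on the PATH (the module path is taken from the QA report above; `-Dtest` is Surefire's test-class filter):

        ```shell
        # Suites reported as failing both with and without the patch.
        FAILING_TESTS="TestNamenodeRetryCache,TestRetryCacheWithHA,TestNameNodeMetrics"
        # Print the command rather than running it here, since a full Hadoop checkout is required.
        echo "mvn -pl hadoop-hdfs-project/hadoop-hdfs test -Dtest=${FAILING_TESTS}"
        ```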


          People

          • Assignee:
            ajayydv Ajay Kumar
          • Reporter:
            ajayydv Ajay Kumar
          • Votes:
            0
          • Watchers:
            13

            Dates

            • Created:
              Updated: