Hadoop HDFS / HDFS-4832

Namenode doesn't change the number of missing blocks in safemode when DNs rejoin or leave

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 3.0.0, 0.23.7, 2.1.0-beta
    • Fix Version/s: 3.0.0, 2.1.0-beta, 0.23.9
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed
    • Release Note:
      This change makes the NameNode keep its internal replication queues and DataNode state updated in manual safe mode. This allows metrics and the UI to present up-to-date information while in safe mode. The behavior during startup safe mode is unchanged.

      Description

      Courtesy Karri VRK Reddy!

      1. Namenode lost datanodes causing missing blocks
      2. Namenode was put in safe mode
      3. Datanode restarted on dead nodes
      4. Waited a long time for the NN UI to reflect the recovered blocks.
      5. Forced the NN out of safe mode and suddenly there were no more missing blocks.

      I was able to replicate this on 0.23 and trunk. I set dfs.namenode.heartbeat.recheck-interval to 1 and killed the DN to simulate a "lost" datanode. The opposite case also has problems (i.e. a datanode failing while the NN is in safe mode doesn't lead to a missing-blocks message).

      Without the NN updating this list of missing blocks, the grid admins will not know when to take the cluster out of safemode.

      1. HDFS-4832.branch-0.23.patch
        7 kB
        Ravi Prakash
      2. HDFS-4832.patch
        8 kB
        Ravi Prakash
      3. HDFS-4832.patch
        8 kB
        Ravi Prakash
      4. HDFS-4832.patch
        7 kB
        Ravi Prakash
      5. HDFS-4832.patch
        7 kB
        Ravi Prakash
      6. HDFS-4832.patch
        7 kB
        Ravi Prakash
      7. HDFS-4832.patch
        0.8 kB
        Ravi Prakash


          Activity

          Ravi Prakash added a comment -

          BlockManager.addStoredBlock was skipping the handling of over- and under-replicated blocks during any safe mode. This patch makes it skip that only during startup safe mode.
          Simple half-line fix. Can someone please review it?
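          As a reader's aid, the half-line change described above can be modeled with a tiny self-contained sketch. The enum and class names here are hypothetical stand-ins, not the actual Hadoop code; the real check lives inside BlockManager.addStoredBlock.

```java
// Hypothetical model of the safe-mode distinction the fix relies on.
// Before the patch, queue handling was skipped in ANY safe mode;
// after it, only during startup safe mode.
enum SafeMode { NONE, STARTUP, MANUAL }

class SafeModeCheck {
    static boolean isInSafeMode(SafeMode m) { return m != SafeMode.NONE; }
    static boolean isInStartupSafeMode(SafeMode m) { return m == SafeMode.STARTUP; }

    // Old condition: skip over/under-replication handling in any safe mode.
    static boolean skipQueueHandlingBefore(SafeMode m) { return isInSafeMode(m); }

    // New condition: skip only during startup safe mode, so manual safe
    // mode keeps the missing-block counts up to date.
    static boolean skipQueueHandlingAfter(SafeMode m) { return isInStartupSafeMode(m); }
}
```

          The net effect: in manual safe mode the old condition returns true (queues go stale), while the new one returns false (queues stay updated); startup safe mode behaves the same under both.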

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12583573/HDFS-4832.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.TestDFSShell

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4409//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4409//console

          This message is automatically generated.

          Kihwal Lee added a comment -

          The SBN also skips processing of over/under-replicated blocks. The new condition in your patch will change the SBN's behavior.

          There is another aspect of this issue. Since neededReplications is not scanned in safe mode or on the SBN, orphaned blocks in it cause problems during metaSave(). They normally go away when the ReplicationMonitor generates DN work, but since that doesn't happen in these modes, those blocks can linger. When metaSave() hits one of them, it dies with an NPE because there is no corresponding INodeFile.

          Kihwal Lee added a comment -

          Since neededReplications is not scanned in safe mode and on SBN ...

          This is true, but it is not a problem on the SBN. The SBN can have blocks from the future, so it is natural for it to get reports on blocks that look orphaned; also, it does not serve normal requests. The problem is when orphaned blocks are in neededReplications on an active node in safe mode.

          From what we have seen in clusters, a combination of forced safe mode, deletions, and DN restarts can make it happen.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12584570/HDFS-4832.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4430//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4430//console

          This message is automatically generated.

          Konstantin Shvachko added a comment -

          Ravi, it is OK to maintain replication queues during manual safe mode. But you should not send replication and deletion commands back to DataNodes on heartbeat. That defeats the purpose of safe mode, which should guarantee that blocks are not replicated and not deleted.

          Ravi Prakash added a comment -

          Thanks for the review, Konstantin. Yes, I agree. This is why I have put this in the DatanodeManager, to make sure that none of the replication/deletion work is sent back to the DNs.

          +        // If we are in safemode, do not send back any recovery / replication
          +        // requests
          +        if(namesystem.isInSafeMode()) {
          +          return new DatanodeCommand[0];
          +        }
          
          Kihwal Lee added a comment -

          Here are some comments:

          • The condition for detecting non-initial safe mode: the HA state is already checked in the namenode method, so you don't have to check it again.
          • isInStartupSafeMode() returns true for any automatic safe mode. E.g. if the resource checker puts the NN in safe mode, it will return true.
          • The existing code drained scheduled work in safe mode, but the patch makes it immediately stop sending scheduled work to DNs. This seems like correct behavior for safe mode, but that work can still be sent out after leaving safe mode, which may not be ideal. E.g. if the NN is suffering from flaky DNS, DNs will appear dead, come back, and appear dead again, generating a lot of invalidation and replication work. Admins may put the NN in safe mode to safely ride out the storm. When they do, the unnecessary work needs to stop rather than merely being delayed. Please make sure unintended damage does not occur after leaving safe mode.
          Ravi Prakash added a comment -

          Thanks for your review Kihwal. I've updated the patch.

          isInStartupSafeMode() returns true for any automatic safe mode. E.g. if the resource checker puts the NN in safe mode, it will return true.

          I have filed HDFS-4862 to fix this. The method name is unfortunately contrary to its behavior.

          The existing code drained scheduled work in safe mode, but the patch makes it immediately stop sending scheduled work to DNs. This seems like correct behavior for safe mode, but that work can still be sent out after leaving safe mode, which may not be ideal. E.g. if the NN is suffering from flaky DNS, DNs will appear dead, come back, and appear dead again, generating a lot of invalidation and replication work. Admins may put the NN in safe mode to safely ride out the storm. When they do, the unnecessary work needs to stop rather than merely being delayed. Please make sure unintended damage does not occur after leaving safe mode.

          UnderReplicatedBlocks is the priority queue maintained for neededReplications, and it is updated when nodes join or are marked dead. However, once BlockManager.computeReplicationWorkForBlocks is called, the ReplicationWork is transferred to the DatanodeDescriptor's replicateBlocks queue, from which it will not be rescinded. computeReplicationWorkForBlocks() is called every replicationRecheckInterval, which defaults to 3 seconds. Can we please handle this in a separate JIRA?
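          The two-stage hand-off described above can be sketched with a minimal model. The class below is an illustrative stand-in (not the real BlockManager or DatanodeDescriptor); only the queue names are borrowed from the comment.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative model: blocks sit in neededReplications (still rescindable)
// until computeReplicationWork moves them to a per-DN replicateBlocks queue,
// after which they are no longer rescinded.
class ReplicationPipelineModel {
    final Queue<String> neededReplications = new ArrayDeque<>(); // NN-wide queue
    final Queue<String> dnReplicateBlocks = new ArrayDeque<>();  // per-DN queue

    void markUnderReplicated(String block) { neededReplications.add(block); }

    // Rescinding works only while the block is still in the NN-wide queue.
    boolean rescind(String block) { return neededReplications.remove(block); }

    // Stand-in for computeReplicationWorkForBlocks: drain into the DN queue.
    void computeReplicationWork() {
        String b;
        while ((b = neededReplications.poll()) != null) {
            dnReplicateBlocks.add(b);
        }
    }
}
```

          In this model, rescind() succeeds before computeReplicationWork() runs and fails afterwards, which mirrors the concern that already-transferred work cannot be taken back.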

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12585143/HDFS-4832.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4446//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4446//console

          This message is automatically generated.

          Ravi Prakash added a comment -

          Hmm... funny. Eclipse ran the test fine, but the same test failed when run from the command line.

          Anyway, I've fixed the test so it passes both in Eclipse and on the command line.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12585279/HDFS-4832.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4454//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4454//console

          This message is automatically generated.

          Kihwal Lee added a comment -

          If I am reading the existing code correctly, if a node registers and then dies after sending in a block report during startup safe mode, the missing block count won't be decremented? Then if a whole bunch of datanodes crash before the NN leaves safe mode (e.g. during initialization of the repl queues), the namenode will still think it's good to go. But once it gets out of safe mode, it will quickly notice a lot of under-replicated and missing blocks.

          Do we ever want this behavior? Is this inconsistency justified by the shorter duration of startup safe mode? If we fix this (still with no actual work generated), how much overhead will it introduce?

          Startup safe mode is special because the repl queues have never been initialized. But in any subsequent safe mode, automatic or manual, the repl queues are there. Maybe it is cheaper to keep them updated and reduce the amount of work needed to reinitialize the queues.

          Ravi Prakash added a comment -

          computeReplicationWorkForBlocks() is called every replicationRecheckInterval, which defaults to 3 seconds. Can we please handle this in a separate JIRA?

          To correct myself, no more replication/invalidation work will be scheduled in safe mode, because computeDatanodeWork()'s very first statement checks for safe mode. So we don't need a new JIRA.
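          A hedged sketch of the guard being described (illustrative only; the real check sits at the top of the Hadoop computeDatanodeWork method, and the class below is a hypothetical model):

```java
// Illustrative model: when the namesystem is in safe mode, no replication
// or invalidation work is scheduled at all.
class ReplicationMonitorModel {
    boolean inSafeMode;
    int workScheduled;

    ReplicationMonitorModel(boolean inSafeMode) { this.inSafeMode = inSafeMode; }

    int computeDatanodeWork() {
        if (inSafeMode) {
            return 0; // very first check: bail out, schedule nothing
        }
        workScheduled++; // stand-in for computeReplicationWorkForBlocks etc.
        return workScheduled;
    }
}
```

          Because the method returns before any work is generated, entering safe mode stops new work at the source rather than letting it queue up for later.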

          Ravi Prakash added a comment -

          The patch passed test-patch.sh on my machine several times. Rolling the dice again.

          Ravi Prakash added a comment -

          Hi Kihwal, that change was made in https://issues.apache.org/jira/browse/HDFS-1295 . Matt reports some statistics there. Please let me know if it's worthwhile to take that performance hit to report the correct block status.

          Kihwal Lee added a comment -

          +1 to the approach. This patch stops the generation of new work and the sending of remaining work. Since replication queues are kept updated in manual safe mode, it is okay to skip reinitialization of the repl queues when exiting manual safe mode. HA is fine with this change; when the SBN transitions to active, the queues are cleared and initialized unless the NN is in startup safe mode, in which case the repl queues are initialized later, when exiting safe mode.

          Kihwal Lee added a comment -

          Please let me know if it's worthwhile to take that performance hit to report the correct block status.

          The NN will end up doing more work in manual safe mode than before this change, but it won't have to reinitialize the repl queues, which can take a very long time. Startup safe mode is not affected.

          Ravi Prakash added a comment -

          The patch ported to trunk

          Ravi Prakash added a comment -

          Oops! I mean the patch ported to branch-0.23

          Ravi Prakash added a comment -

          The patch for trunk and branch-2

          Ravi Prakash added a comment -

          Y u no test my patch Hadoop QA?

          Uploading the same patch. Maybe this time it will get picked up

          Kihwal Lee added a comment -

          Your precommit build is running right now. https://builds.apache.org/job/PreCommit-HDFS-Build/449

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12586729/HDFS-4832.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4498//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4498//console

          This message is automatically generated.

          Kihwal Lee added a comment -

          +1 the patch looks good.

          Kihwal Lee added a comment -

          I've committed this to trunk, branch-2, branch-2.1.0-beta, and branch-0.23. Thanks for working on this patch, Ravi.

          Hudson added a comment -

          Integrated in Hadoop-trunk-Commit #3881 (See https://builds.apache.org/job/Hadoop-trunk-Commit/3881/)
          HDFS-4832. Namenode doesn't change the number of missing blocks in safemode when DNs rejoin or leave. Contributed by Ravi Prakash. (Revision 1490803)

          Result = SUCCESS
          kihwal : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490803
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java

            People

            • Assignee: Ravi Prakash
            • Reporter: Ravi Prakash
            • Votes: 0
            • Watchers: 9
