Hadoop HDFS / HDFS-7645

Rolling upgrade is restoring blocks from trash multiple times

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.6.0
    • Fix Version/s: 2.8.0, 2.7.2, 3.0.0-alpha1
    • Component/s: datanode
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      When performing an HDFS rolling upgrade, the trash directory is getting restored twice when under normal circumstances it shouldn't need to be restored at all. If I understand correctly, the only time these blocks should be restored is if we need to roll back a rolling upgrade.

      On a busy cluster, this can cause significant and unnecessary block churn both on the datanodes, and more importantly in the namenode.

      The two times this happens are:
      1) restart of DN onto new software

        private void doTransition(DataNode datanode, StorageDirectory sd,
            NamespaceInfo nsInfo, StartupOption startOpt) throws IOException {
          if (startOpt == StartupOption.ROLLBACK && sd.getPreviousDir().exists()) {
            Preconditions.checkState(!getTrashRootDir(sd).exists(),
                sd.getPreviousDir() + " and " + getTrashRootDir(sd) + " should not " +
                " both be present.");
            doRollback(sd, nsInfo); // rollback if applicable
          } else {
            // Restore all the files in the trash. The restored files are retained
            // during rolling upgrade rollback. They are deleted during rolling
            // upgrade downgrade.
            int restored = restoreBlockFilesFromTrash(getTrashRootDir(sd));
            LOG.info("Restored " + restored + " block files from trash.");
          }
        }

      2) When the heartbeat response no longer indicates a rolling upgrade is in progress

        /**
         * Signal the current rolling upgrade status as indicated by the NN.
         * @param inProgress true if a rolling upgrade is in progress
         */
        void signalRollingUpgrade(boolean inProgress) throws IOException {
          String bpid = getBlockPoolId();
          if (inProgress) {
            dn.getFSDataset().enableTrash(bpid);
            dn.getFSDataset().setRollingUpgradeMarker(bpid);
          } else {
            dn.getFSDataset().restoreTrash(bpid);
            dn.getFSDataset().clearRollingUpgradeMarker(bpid);
          }
        }
      

      HDFS-6800 and HDFS-6981 modified this behavior, making it not completely clear whether this is somehow intentional.

      1. HDFS-7645.01.patch
        1 kB
        Keisuke Ogiwara
      2. HDFS-7645.02.patch
        8 kB
        Keisuke Ogiwara
      3. HDFS-7645.03.patch
        6 kB
        Keisuke Ogiwara
      4. HDFS-7645.04.patch
        7 kB
        Keisuke Ogiwara
      5. HDFS-7645.05.patch
        18 kB
        Vinayakumar B
      6. HDFS-7645.06.patch
        22 kB
        Vinayakumar B
      7. HDFS-7645.07.patch
        22 kB
        Vinayakumar B

        Issue Links

          Activity

          cmccabe Colin P. McCabe added a comment -

          I think we should get rid of trash and just always create a previous/ directory when doing rolling upgrade, the same as we do with regular upgrade. The speed is clearly acceptable since we've done these upgrades in the field when switching to the blockid-based layout with no problems. And it will be a lot more maintainable and less confusing.

          arpitagarwal Arpit Agarwal added a comment - - edited

          The first restore was by design when the rolling upgrade feature was added (HDFS-6005). It simplified the rollback procedure by not requiring the -rollback flag to the DataNode, so regular startup/rollback could be treated similarly by restoring from trash.

          HDFS-6800 added back the requirement to pass the -rollback flag during RU rollback, to support layout changes. The second restore was a side effect of the same fix. We can probably eliminate both restores now.

          I think we should get rid of trash and just always create a previous/ directory when doing rolling upgrade, the same as we do with regular upgrade. The speed is clearly acceptable since we've done these upgrades in the field when switching to the blockid-based layout with no problems. And it will be a lot more maintainable and less confusing.

          DN layout changes will be rare for minor/point releases. I am wary of eliminating trash without some numbers showing hard link performance with millions of blocks is on par with trash. Even a few seconds per DN adds up to many hours/days when upgrading thousands of DNs sequentially. Once we fix this issue raised by Nathan the overhead of trash as compared to regular startup is nil.

          cmccabe Colin P. McCabe added a comment -

          DN layout changes will be rare for minor/point releases. I am wary of eliminating trash without some numbers showing hard link performance with millions of blocks is on par with trash. Even a few seconds per DN adds up to many hours/days when upgrading thousands of DNs sequentially. Once we fix this issue raised by Nathan the overhead of trash as compared to regular startup is nil.

          Yeah. Startup time during an upgrade is important. Our numbers for creating the "previous" directory in HDFS-6482 were about 1 second per 100,000 blocks. We also parallelized the hard link process across all volumes. So I would expect it to be very quick for the average DN, which has about 200k-400k blocks split across 10 storage directories.

          Anyway, I don't feel strongly about this... if we can make "trash" work, then so be it. It sounds like the fix is not that difficult.
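
The timing numbers above can be sanity-checked with a back-of-envelope calculation (~1 second per 100,000 blocks, hard-linking parallelized across volumes). The figures and names below are illustrative, not measurements from the actual upgrade code:

```java
// Rough estimate of hard-link time during DN upgrade, using the
// HDFS-6482 figure of ~1 s per 100,000 blocks and assuming the work
// parallelizes evenly across storage directories. Illustrative only.
public class UpgradeEstimate {
    static double hardLinkSeconds(long blocks, int volumes) {
        double serialSeconds = blocks / 100_000.0; // ~1 s per 100k blocks
        return serialSeconds / volumes;            // parallel across volumes
    }

    public static void main(String[] args) {
        // An "average" DN per the comment: ~300k blocks over 10 storage dirs.
        System.out.printf("~%.2f s%n", hardLinkSeconds(300_000, 10));
    }
}
```

For a typical DN this works out to well under a second, consistent with the "very quick" expectation above.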

          ogikei Keisuke Ogiwara added a comment -

          I want to work on this ticket. Can you assign it to me? Thank you.

          arpitagarwal Arpit Agarwal added a comment -

          Thanks for volunteering to work on this Keisuke Ogiwara. I assigned it to you.

          ogikei Keisuke Ogiwara added a comment -

          I attached a patch, thank you.

          arpitagarwal Arpit Agarwal added a comment -

          Hi Keisuke Ogiwara, thank you for posting a patch. This fix looks incomplete.

          1. The trash must be restored on rollback. Fairly easy to fix this in the same function. If the rollback option was passed and previous exists we call doRollback. If previous does not exist, restore trash.
          2. On finalize, the trash directories must be deleted. I think this will be handled by signalRollingUpgrade but I'd have to check it to make sure.

          TestDataNodeRollingUpgrade should flag both these issues.
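
The branching proposed above could be sketched as follows. This is a hedged model of the suggestion, not the actual HDFS code; the class, enum, and parameter names are hypothetical:

```java
// Illustrative model of the proposed doTransition() startup branching:
// restore from trash only when -rollback was passed, and only when there
// is no previous/ directory to roll back from. Names are hypothetical.
public class RollbackDecision {
    enum Action { DO_ROLLBACK, RESTORE_TRASH, NORMAL_STARTUP }

    static Action decide(boolean rollbackOption, boolean previousDirExists) {
        if (!rollbackOption) {
            // No unconditional restore on ordinary restarts any more.
            return Action.NORMAL_STARTUP;
        }
        if (previousDirExists) {
            // Layout rollback via the previous/ directory.
            return Action.DO_ROLLBACK;
        }
        // Rolling-upgrade rollback with no layout change: restore trash.
        return Action.RESTORE_TRASH;
    }

    public static void main(String[] args) {
        System.out.println(decide(true, false)); // RESTORE_TRASH
    }
}
```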

          arpitagarwal Arpit Agarwal added a comment -

          Also the restore from signalRollingUpgrade pointed out by Nathan can probably be deleted.

          vinayrpet Vinayakumar B added a comment -

          The trash must be restored on rollback. Fairly easy to fix this in the same function. If the rollback option was passed and previous exists we call doRollback. If previous does not exist, restore trash.

          This does exactly the job when the datanode is rolled back. But the problem is (as from the beginning) that the entire cluster (including those DNs that have not yet been upgraded) must be restarted with the '-rollback' option to restore.

          On finalize, the trash directories must be deleted. I think this will be handled by signalRollingUpgrade but I'd have to check it to make sure

          Downgrade also needs to be added to this list. The current way of checking just the boolean status will not work. The DataNode has to be sure that the NameNode is downgraded/finalized (and should even consider multiple restarts of the NameNode after this operation), but not rolled back, before deleting the trash.
          Otherwise the DataNode will end up deleting trash even for a rollback of the NameNode, which might lead to missing blocks.

          ogikei Keisuke Ogiwara added a comment -

          I attached a new patch. Please review it. Thank you.

          vinayrpet Vinayakumar B added a comment -

          Hi Keisuke Ogiwara, thanks for the new patch.

          1. With the following change, trash will never be enabled. Am I right? If yes, this can be removed.

               if (inProgress) {
          -      dn.getFSDataset().enableTrash(bpid);
                 dn.getFSDataset().setRollingUpgradeMarker(bpid);
               } else {

          2. In the check below, we also need to check for startOpt == StartupOption.ROLLBACK. Otherwise trash will be restored even for a normal restart, which is nothing but the old behaviour itself. Am I missing something?

          -    } else {
          +    } else if (!sd.getPreviousDir().exists()) {

          One more question, not exactly related to the patch:
          when will the trash be deleted for finalize/downgrade?

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12702435/HDFS-7645.02.patch
          against trunk revision 3560180.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.balancer.TestBalancer

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9729//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9729//console

          This message is automatically generated.

          ogikei Keisuke Ogiwara added a comment -

          Hi Vinayakumar B, thanks for the review. Your advice is great. I attached a new patch. Please review it, thank you.

          And I have checked that trash will be deleted for finalize/downgrade. But after I updated the code, assertion errors happened in the testWithLayoutChangeAndFinalize(), testDatanodeRollingUpgradeWithFinalize() and testWithLayoutChangeAndRollback() methods of TestDataNodeRollingUpgrade. Are these errors related to the updated code?

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12702780/HDFS-7645.03.patch
          against trunk revision 5e9b814.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade

          The following test timeouts occurred in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.TestAppendSnapshotTruncate

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9747//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9747//console

          This message is automatically generated.

          kihwal Kihwal Lee added a comment -

          I think you want clearTrash() to be called when a rolling upgrade is finalized. That is, if inProgress is not true, clear all trash. For regular or downgrade start-ups, if the rolling upgrade is already aborted/finalized, the trash will get cleared once the datanode registers with the namenode. So we don't have to do anything special on start-up.

          As for the breaking test cases, when there is a layout change during rolling upgrade, the old blocks are moved to previous. The current code does this by restoring trash and going through doUpgrade(). In order to keep this behavior, doTransition() needs to check for a layout change (but ignore ctime changes) and do the restore before the check for calling doUpgrade(). I tried the following and it seems to work.

              if (this.layoutVersion > HdfsConstants.DATANODE_LAYOUT_VERSION) {
                int restored = restoreBlockFilesFromTrash(getTrashRootDir(sd));
                LOG.info("Restored " + restored + " block files from trash " +
                    "before the layout upgrade. These blocks will be moved to " +
                    "the previous directory during the upgrade");
              }
          
          arpitagarwal Arpit Agarwal added a comment - - edited

          Hi Keisuke Ogiwara, did you get a chance to read the comments I added for the v1 patch?

          Could you please describe what your patch is attempting to do? Is it your intention to get rid of trash completely? Removing enableTrash will have that effect.

          This does exactly the job when the datanode is rolled back. But the problem is (as from the beginning) that the entire cluster (including those DNs that have not yet been upgraded) must be restarted with the '-rollback' option to restore.

          Vinayakumar B, we already require the cluster to be stopped and DNs to be restarted with -rollback to proceed with the rollback so we can support DN layout upgrades. Not sure I understand what you meant.

          I think you want clearTrash() to be called when a rolling upgrade is finalized. That is, if inProgress is not true, clear all trash. For regular or downgrade start-ups, if the rolling upgrade is already aborted/finalized, the trash will get cleared once the datanode registers with the namenode. So we don't have to do anything special on start-up.

          Clearing trash is probably the right thing to do but there is a caveat. DNs do not get a 'finalize rolling upgrade' indication. DNs look for RollingUpgradeStatus in the heartbeat response. If it is absent then DNs infer that the rolling upgrade is finalized. If the administrator attempts to do a rollback without stopping all DNs first then clearing trash will cause data loss. That's a risk of clearing vs doing restore. Currently with restore there is no such risk since NNs will either keep or delete the blocks appropriately.

          ogikei Keisuke Ogiwara added a comment -

          I have attached a new patch. Please review it when you are free. Thank you very much.

          vinayrpet Vinayakumar B added a comment - - edited

          DNs look for RollingUpgradeStatus in the heartbeat response. If it is absent then DNs infer that the rolling upgrade is finalized. If the administrator attempts to do a rollback without stopping all DNs first then clearing trash will cause data loss.

          Even if the administrator does it by mistake, it will be an irrecoverable data loss.
          To avoid this, how about keeping the finalized RollingUpgradeStatus in the NameNode once the upgrade is finalized, instead of making it null?
          And in the DNs we can specifically check for the FINALIZED status before clearing the trash.

          Any thoughts?
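
A minimal model of this proposal might look like the following. The enum and method names are hypothetical, not the actual RollingUpgradeStatus API; the point is that trash is cleared only on an explicit FINALIZED signal, never on an absent status:

```java
// Illustrative sketch: the DN acts on the heartbeat's rolling-upgrade
// status and treats an *absent* status as ambiguous (the NN may have been
// rolled back), retaining trash instead of clearing it. Names hypothetical.
public class TrashSignal {
    enum RUStatus { IN_PROGRESS, FINALIZED, ABSENT }
    enum TrashAction { ENABLE_TRASH, CLEAR_TRASH, KEEP_TRASH }

    static TrashAction onHeartbeat(RUStatus status) {
        switch (status) {
            case IN_PROGRESS:
                return TrashAction.ENABLE_TRASH; // keep moving deletes to trash
            case FINALIZED:
                return TrashAction.CLEAR_TRASH;  // explicit finalize: safe to delete
            default:
                return TrashAction.KEEP_TRASH;   // ambiguous: err on retaining data
        }
    }

    public static void main(String[] args) {
        System.out.println(onHeartbeat(RUStatus.ABSENT)); // KEEP_TRASH
    }
}
```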

          vinayrpet Vinayakumar B added a comment -

          Attaching patch for keeping the RollingUpgradeStatus in NN after finalization. On top of Keisuke Ogiwara's work.

          hadoopqa Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12705115/HDFS-7645.05.patch
          against trunk revision a89b087.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9937//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9937//console

          This message is automatically generated.

          kihwal Kihwal Lee added a comment -

          It has been a while since I saw an hdfs precommit working and returning with all +1s.

          kihwal Kihwal Lee added a comment -

          How will it work when old datanodes are talking to a new namenode? When an RU is finalized in the middle, the old datanodes will still think the RU is in progress. If we assume they will get upgraded soon after, this might not be a problem; otherwise their trash will grow. In the opposite situation, newer datanodes will not clear trash until the namenode is upgraded and the upgrade finalized. But this should not happen if the normal upgrade procedure is followed. Maybe it might happen in some cases for federated clusters. In all cases, the delay in clearing trash will be temporary. Agree?

          The rest of patch looks fine.

          arpitagarwal Arpit Agarwal added a comment - - edited

          The v5 patch from Vinayakumar B looks pretty good to me. Kihwal, I agree on both points. In either case the delay is expected if the administrator does not follow the upgrade sequence, and it is good to err on the side of retaining data.

          Would you consider adding the following tests?

          1. Two successive rolling upgrades.
          2. Regular upgrade initiated immediately after the rolling upgrade is completed to make sure we correctly handle RollingUpgradeStatus.finalized and DNA_FINALIZE together.
          vinayrpet Vinayakumar B added a comment -

          Two successive rolling upgrades.

          Done

          Regular upgrade initiated immediately after the rolling upgrade is completed to make sure we correctly handle RollingUpgradeStatus.finalized and DNA_FINALIZE together.

          I have added a test for this, but I didn't understand how "RollingUpgradeStatus.finalized and DNA_FINALIZE" would come together, so I didn't add any assertions.

          After finalize, RollingUpgradeStatus will remain non-null only until the NN is restarted; once it is restarted, it will be null.
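That lifecycle can be modeled in a few lines. This is an illustrative sketch only; `ruStatus` and the class below are hypothetical helpers, not HDFS APIs:

```java
// Illustrative model of the RollingUpgradeStatus lifecycle described above.
// While the NN stays up after finalize, heartbeats still carry a (finalized)
// status; after an NN restart the status becomes null.
public class RuStatusLifecycle {
    static String ruStatus(boolean finalized, boolean nnRestartedSinceFinalize) {
        if (nnRestartedSinceFinalize) {
            return null; // NN restart drops the rolling-upgrade status entirely
        }
        return finalized ? "finalized" : "in-progress";
    }

    public static void main(String[] args) {
        System.out.println(ruStatus(true, false)); // finalized (still non-null)
        System.out.println(ruStatus(true, true));  // null
    }
}
```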

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12707736/HDFS-7645.06.patch
          against trunk revision af618f2.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/10087//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/10087//console

          This message is automatically generated.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12707736/HDFS-7645.06.patch
          against trunk revision 1ed9fb7.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/10107//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/10107//console

          This message is automatically generated.

          vinayrpet Vinayakumar B added a comment -

          Fixed the test failure.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12708137/HDFS-7645.07.patch
          against trunk revision ae3e8c6.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
          org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/10109//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/10109//console

          This message is automatically generated.

          arpitagarwal Arpit Agarwal added a comment -

          I am reviewing the latest patch.

          arpitagarwal Arpit Agarwal added a comment - - edited

          +1, thanks for adding the tests. I will commit it shortly.

          Kihwal Lee, are you okay with the latest patch? Thanks.

          kihwal Kihwal Lee added a comment -

          +1 lgtm.

          arpitagarwal Arpit Agarwal added a comment -

          I committed it to trunk and branch-2.

          Thanks for the contribution Vinayakumar B and Keisuke Ogiwara. I credited you both for the patch.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #7471 (See https://builds.apache.org/job/Hadoop-trunk-Commit/7471/)
          HDFS-7645. Rolling upgrade is restoring blocks from trash multiple times (Contributed by Vinayakumar B and Keisuke Ogiwara) (arp: rev 1a495fbb489c9e9a23b341a52696d10e9e272b04)

          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeRollingUpgrade.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/RollingUpgradeInfo.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/RollingUpgradeStatus.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
          vinayrpet Vinayakumar B added a comment -

          Thanks everyone.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #149 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/149/)
          HDFS-7645. Rolling upgrade is restoring blocks from trash multiple times (Contributed by Vinayakumar B and Keisuke Ogiwara) (arp: rev 1a495fbb489c9e9a23b341a52696d10e9e272b04)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/RollingUpgradeStatus.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeRollingUpgrade.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/RollingUpgradeInfo.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Yarn-trunk #883 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/883/)
          HDFS-7645. Rolling upgrade is restoring blocks from trash multiple times (Contributed by Vinayakumar B and Keisuke Ogiwara) (arp: rev 1a495fbb489c9e9a23b341a52696d10e9e272b04)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/RollingUpgradeInfo.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/RollingUpgradeStatus.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeRollingUpgrade.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #2081 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2081/)
          HDFS-7645. Rolling upgrade is restoring blocks from trash multiple times (Contributed by Vinayakumar B and Keisuke Ogiwara) (arp: rev 1a495fbb489c9e9a23b341a52696d10e9e272b04)

          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeRollingUpgrade.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/RollingUpgradeStatus.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/RollingUpgradeInfo.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #140 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/140/)
          HDFS-7645. Rolling upgrade is restoring blocks from trash multiple times (Contributed by Vinayakumar B and Keisuke Ogiwara) (arp: rev 1a495fbb489c9e9a23b341a52696d10e9e272b04)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeRollingUpgrade.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/RollingUpgradeInfo.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/RollingUpgradeStatus.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #149 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/149/)
          HDFS-7645. Rolling upgrade is restoring blocks from trash multiple times (Contributed by Vinayakumar B and Keisuke Ogiwara) (arp: rev 1a495fbb489c9e9a23b341a52696d10e9e272b04)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeRollingUpgrade.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/RollingUpgradeInfo.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/RollingUpgradeStatus.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #2099 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2099/)
          HDFS-7645. Rolling upgrade is restoring blocks from trash multiple times (Contributed by Vinayakumar B and Keisuke Ogiwara) (arp: rev 1a495fbb489c9e9a23b341a52696d10e9e272b04)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeRollingUpgrade.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/RollingUpgradeStatus.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/RollingUpgradeInfo.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
          andrew.wang Andrew Wang added a comment -

          This change is incompatible since we expose RollingUpgradeInfo in the NN's JMX (a public API). As discussed above, rather than being null on finalization, it now sets the finalization time.

          Have we thought about other ways of solving this issue? Else we can change the JMX method to still return null on finalization.

          vinayrpet Vinayakumar B added a comment -

          This change is incompatible since we expose RollingUpgradeInfo in the NN's JMX (a public API). As discussed above, rather than being null on finalization, it now sets the finalization time.

          Oh! Thanks Andrew Wang for pointing out. That was a miss.

          Have we thought about other ways of solving this issue? Else we can change the JMX method to still return null on finalization.

          Since the DN side needs to differentiate between the FINALIZED rolling upgrade status and the rolled-back status, we set the finalize time on finalization.

          Else we can change the JMX method to still return null on finalization.

          We can do this if the fix is backported to stable branches; currently it's only available in branch-2.
          If it's not critical to change it back, then we can add a release note indicating the change.

          Note that, ClientProtocol#rollingUpgrade(..) also changed to return non-null finalized status as well.

          andrew.wang Andrew Wang added a comment -

          Hey Vinay,

          It might be okay to sneak in this incompatible change; I doubt there are many users of this API. It's also possible to write an "after" check that works with both old and new NNs to check for finalization:

          // before
          if (ruinfo == null)
          // after
          if (ruinfo == null || ruinfo.isFinalized())
          

          One related change we could also make is adding boolean isStarted and isFinalized to the JMX output, since that way callers won't have to do a "!= 0" check. Essentially all the normal benefits of a getter. I just filed HDFS-8656 to do this.

          In hindsight it would have been nice to always return an RUInfo so the check could just be if (ruinfo.isFinalized()). The need for null checking is a bit ugly.
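          The "before/after" check above can be sketched as a small self-contained example. Note this is a simplified illustration of the compatibility logic discussed in this thread, not code from the patch: RollingUpgradeInfo below is a minimal stand-in for org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo, and the RollingUpgradeCheck class name is hypothetical.

          ```java
          // Sketch of the version-tolerant finalization check: old NameNodes
          // return null once the rolling upgrade is finalized, while NameNodes
          // with this change return an info object carrying the finalize time.
          // Treating either case as "finalized" works against both.
          public class RollingUpgradeCheck {

            /** Simplified stand-in: finalizeTime == 0 means not finalized yet. */
            static class RollingUpgradeInfo {
              private final long finalizeTime;
              RollingUpgradeInfo(long finalizeTime) { this.finalizeTime = finalizeTime; }
              boolean isFinalized() { return finalizeTime != 0; }
            }

            /** The "after" check: null (old NN) or finalized info (new NN). */
            static boolean isFinalized(RollingUpgradeInfo ruinfo) {
              return ruinfo == null || ruinfo.isFinalized();
            }

            public static void main(String[] args) {
              System.out.println(isFinalized(null));                        // true: old NN after finalization
              System.out.println(isFinalized(new RollingUpgradeInfo(0L)));  // false: upgrade in progress
              System.out.println(isFinalized(new RollingUpgradeInfo(42L))); // true: new NN after finalization
            }
          }
          ```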

          andrew.wang Andrew Wang added a comment -

          I actually discovered that HDFS-7894 "fixed" this for the JMX by adding a check for isRollingUpgrade(). I changed HDFS-8656 to also do this for the ClientProtocol API, please review there if interested.

          kihwal Kihwal Lee added a comment -

          We should fix this in 2.7.2. That means pulling HDFS-8656 as well. Any objections?

          ctrezzo Chris Trezzo added a comment -

          +1 from me. This might be a good candidate for 2.6.2 as well.

          ctrezzo Chris Trezzo added a comment -

          /cc Sangjin Lee

          vinodkv Vinod Kumar Vavilapalli added a comment -

          Hey folks,

          Trying to understand the compatibility story here in the context of 2.7.2, without much domain expertise.

          This JIRA is marked as incompatible. Now, is the combination of this JIRA (HDFS-7645), the follow-up JIRA (HDFS-8656), and the newly discovered issue and fix (HDFS-9426) considered a compatible change? If not, does it make sense to drop all three from branch-2 and only put them on trunk?

          kihwal Kihwal Lee added a comment -

          HDFS-8656 addressed the incompatibility in ClientProtocol. HDFS-9426 tries to address the DatanodeProtocol issue.

          djp Junping Du added a comment -

          Hi Kihwal Lee, do you suggest we should backport these three fixes (this JIRA, HDFS-8656 and HDFS-9426) to branch-2.6?


            People

            • Assignee: ogikei Keisuke Ogiwara
            • Reporter: nroberts Nathan Roberts
            • Votes: 0
            • Watchers: 18
