Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.0.0-alpha2
    • Component/s: balancer & mover
    • Labels: None

      Description

      This is a redo of the fix in HDFS-10598.

      Attachments

      1. HDFS-10808.001.patch
        29 kB
        Anu Engineer
      2. HDFS-10808.002.patch
        28 kB
        Anu Engineer

        Issue Links

          Activity

          eddyxu Lei (Eddy) Xu added a comment -

          Anu Engineer Could you elaborate a little more on the problems in the HDFS-10598 patch?

          Thanks.

          anu Anu Engineer added a comment - edited

          Lei (Eddy) Xu Sure. I think the change needed was:

          for (Map.Entry<VolumePair, DiskBalancerWorkItem> entry :
              workMap.entrySet()) {
            blockMover.clearExitFlag();
            blockMover.copyBlocks(entry.getKey(), entry.getValue());
          }

          The missing line was the clearExitFlag call. No real changes are needed in the copyBlocks path.
          Also, we seem to have introduced some unintended side effects, like some lines becoming no-ops. For example:

          if (!shouldRun()) {
            continue;
          }

          Looks like the intent of the patch was to remove the setExitFlag usage in copyBlocks, but we reintroduced that in the try {..} finally {..} block. I also wanted to add a unit test with multiple volumes on a single machine where balancing happens, to check for this kind of failure in the future.
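
          For illustration, here is a minimal, self-contained sketch of the failure mode being discussed. The class and method names mirror the ones quoted above, but the bodies are simplified stand-ins, not the actual DiskBalancer/BlockMover code:

          // IllustrativeMover.java -- hedged sketch only; not the real DiskBalancer code.
          import java.util.LinkedHashMap;
          import java.util.Map;

          public class IllustrativeMover {
            private boolean shouldRun = true;            // inverse of the "exit flag"

            void setExitFlag()   { shouldRun = false; }  // ask copyBlocks to wind down
            void clearExitFlag() { shouldRun = true;  }  // reset before the next work item
            boolean shouldRun()  { return shouldRun; }

            void copyBlocks(String volumePair, String workItem) {
              while (shouldRun()) {                      // interruption point
                System.out.println("balancing " + volumePair + " / " + workItem);
                setExitFlag();                           // done with this item: raise the flag
              }
            }

            public static void main(String[] args) {
              Map<String, String> workMap = new LinkedHashMap<>();
              workMap.put("disk0->disk1", "item-a");
              workMap.put("disk2->disk3", "item-b");

              IllustrativeMover blockMover = new IllustrativeMover();
              for (Map.Entry<String, String> entry : workMap.entrySet()) {
                // Without this reset, the flag left behind by the previous work item
                // turns every later copyBlocks() call into a no-op, so only the first
                // volume pair is ever balanced -- the multi-step-plan bug this issue fixes.
                blockMover.clearExitFlag();
                blockMover.copyBlocks(entry.getKey(), entry.getValue());
              }
            }
          }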

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 16s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          +1 mvninstall 8m 37s trunk passed
          +1 compile 0m 52s trunk passed
          +1 checkstyle 0m 29s trunk passed
          +1 mvnsite 1m 0s trunk passed
          +1 mvneclipse 0m 12s trunk passed
          +1 findbugs 1m 49s trunk passed
          +1 javadoc 0m 57s trunk passed
          +1 mvninstall 0m 58s the patch passed
          +1 compile 0m 52s the patch passed
          +1 javac 0m 52s the patch passed
          -0 checkstyle 0m 26s hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 11 unchanged - 5 fixed = 18 total (was 16)
          +1 mvnsite 0m 59s the patch passed
          +1 mvneclipse 0m 12s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 54s the patch passed
          +1 javadoc 0m 56s the patch passed
          -1 unit 66m 28s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 18s The patch does not generate ASF License warnings.
          88m 35s



          Reason Tests
          Failed junit tests hadoop.hdfs.TestEncryptionZones
            hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl



          Subsystem Report/Notes
          Docker Image: yetus/hadoop:9560f25
          JIRA Issue HDFS-10808
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12826079/HDFS-10808.001.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux feb99290c21a 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 5d1609d
          Default Java 1.8.0_101
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/16569/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/16569/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/16569/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/16569/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          eddyxu Lei (Eddy) Xu added a comment - edited

          Anu Engineer Thanks for the explanations.

          • Is clearExitFlag() a duplicate of setRunnable()?
          • Also, we seem to have introduced some unintended side effects, like some lines becoming no-ops.

          I don't follow why it becomes a no-op. If someone else has set exitFlag, either way (using continue or break) will stop the while loop here, right?

          if (!shouldRun()) {
            continue;
          }

          My understanding was that this is to check whether another thread has set this flag to stop DiskBalancer.

          • If you put a clearExitFlag() in the for loop:
          for (Map.Entry<VolumePair, DiskBalancerWorkItem> entry :
              workMap.entrySet()) {
            blockMover.clearExitFlag();
            blockMover.copyBlocks(entry.getKey(), entry.getValue());
          }

          then the diskbalancer -cancel command cannot actually stop the BlockMover thread for multiple work items?

          anu Anu Engineer added a comment -

          All checkstyle warnings are of the "hides a field" nature. One test failure is a timeout error and the other failure is not related to this patch. I also verified that both tests pass on my local machine with this patch applied.

          anu Anu Engineer added a comment -

          Lei (Eddy) Xu Thanks for your comments.

          Is clearExitFlag() a duplicate of setRunnable()?

          Yes, you are absolutely right. Thanks for catching this. I should have called setRunnable instead of adding a new function. I will fix that and post a new patch.

          I don't follow why it becomes a no-op. If someone else has set exitFlag, either way (using continue or break) will stop the while loop here, right?

          Sorry, my comment was cryptic. The intent of having setExitFlag was to reduce the complexity of multiple exits from the loop. Yes, it is possible for us to set the exit flag from another thread, but that was not the main use case, and we don't rely on setExitFlag from other threads. When we discuss cancel, this will become apparent.

          then the diskbalancer -cancel command cannot actually stop the BlockMover thread for multiple work items?

          That is a very valid concern. Fortunately for us, the way we do cancel is by cancelling the executor and not by relying on this flag. Please look at shutdownExecutor.
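
          For context, a hedged sketch of what executor-based cancellation looks like in general (standard java.util.concurrent calls; the method name shutdownExecutor comes from the comment above, while the body and the other names here are illustrative, not the actual DiskBalancer code):

          import java.util.concurrent.ExecutorService;
          import java.util.concurrent.Executors;
          import java.util.concurrent.TimeUnit;

          public class CancelSketch {
            private final ExecutorService scheduler = Executors.newSingleThreadExecutor();

            // Submit the long-running balancing work (stand-in for the BlockMover loop).
            public void submitPlan(Runnable copyWork) {
              scheduler.submit(copyWork);
            }

            // Cancellation path: rather than toggling the per-copyBlocks exit flag,
            // shut the executor down and interrupt the worker thread.
            public void shutdownExecutor() throws InterruptedException {
              scheduler.shutdownNow();                             // interrupts running tasks
              if (!scheduler.awaitTermination(30, TimeUnit.SECONDS)) {
                System.err.println("disk balancer worker did not stop in time");
              }
            }
          }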

          anu Anu Engineer added a comment -

          Uploading a new patch that removes "clearExitFlag", based on review comments from Lei (Eddy) Xu.

          hadoopqa Hadoop QA added a comment -
          +1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 17s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          +1 mvninstall 7m 25s trunk passed
          +1 compile 0m 48s trunk passed
          +1 checkstyle 0m 26s trunk passed
          +1 mvnsite 0m 52s trunk passed
          +1 mvneclipse 0m 12s trunk passed
          +1 findbugs 1m 42s trunk passed
          +1 javadoc 0m 56s trunk passed
          +1 mvninstall 0m 48s the patch passed
          +1 compile 0m 45s the patch passed
          +1 javac 0m 45s the patch passed
          -0 checkstyle 0m 23s hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 11 unchanged - 5 fixed = 18 total (was 16)
          +1 mvnsite 0m 52s the patch passed
          +1 mvneclipse 0m 9s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 49s the patch passed
          +1 javadoc 0m 55s the patch passed
          +1 unit 66m 0s hadoop-hdfs in the patch passed.
          +1 asflicense 0m 20s The patch does not generate ASF License warnings.
          85m 54s



          Subsystem Report/Notes
          Docker Image: yetus/hadoop:9560f25
          JIRA Issue HDFS-10808
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12826118/HDFS-10808.002.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux dd1005035716 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / cd5e10c
          Default Java 1.8.0_101
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/16571/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/16571/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/16571/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          eddyxu Lei (Eddy) Xu added a comment -

          Thanks for the explanation, Anu Engineer

          The intent of having setExitFlag was to reduce the complexity of multiple exits from the loop.

          Wouldn't break be the simplest way to exit the while loop?

          Fortunately for us, the way we do cancel is by cancelling the executor and not by relying on this flag.

          It makes me wonder why we have the flag in the first place. And regarding the comment on the following code:

          // check if someone told us exit, treat this as an interruption
          // point for the thread, since both getNextBlock and moveBlocAcrossVolume
          // can take some time.
          if (!shouldRun()) {
            break;
          }

          If shouldRun is not used for canceling, it is very confusing to me, because the exitFlag and shouldRun() are only consumed within copyBlocks(), yet an atomic boolean is used for the shouldRun flag.

          anu Anu Engineer added a comment -

          Lei (Eddy) Xu Thank you very much for taking time out to look at the code and ask me these pertinent questions.

          Wouldn't break be the simplest way to exit the while loop?

          If you had one or two breaks, maybe. But as the number of breaks increases, the control flow graph becomes quite complex, so it becomes harder to reason about the resources and other state at each exit point. An easy way to think about copyBlocks is as a simple state machine: if you were implementing a state machine, you would probably move to an "exit" state and use the state machine's default mechanisms to handle that state, instead of breaking out at each error condition. copyBlocks follows that pattern. So while in the generic case I agree with you, in this specific case I think this pattern produces code that is easier to reason about.

          On the comment:

          // check if someone told us exit, treat this as an interruption
          // point for the thread, since both getNextBlock and moveBlocAcrossVolume
          // can take some time.
          

          I was under the impression that the comment is quite clear; maybe I am mistaken.

          There are several conditions under which we would like to exit the copy-blocks thread. Some of them are states; some are actions with clear side effects. What we are trying to do is minimize the effects of both, so we introduce the notion of "interruption points" in our copy thread. That is, when we invoke a function and encounter a failure condition, we flag that information so that we bail out at the next safe point. In other words, we don't exit at the point of error, but simply set the state so that the thread can proceed to a point where it considers it safe to exit.

          Examples of actions with side effects are: copying a data block while its metadata is not yet copied, hitting a series of disk errors (we wait until 5) before we can get out, or finding a block that disappears underneath us before we can get to it. Since we have all these kinds of external conditions to take care of, we simply set a flag telling the system to exit cleanly. This paradigm gives us a centralized exit handler, so if the thread had to do some specific cleanup based on a certain error, it is still possible to chain those error handlers at the exit point.
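
          As a concrete sketch of the "interruption point" plus centralized-exit pattern described above (a simplified stand-in with illustrative names and thresholds, not the actual copyBlocks implementation):

          import java.util.concurrent.ThreadLocalRandom;

          public class ExitPatternSketch {
            private static final int MAX_DISK_ERRORS = 5;   // "we wait until 5" above
            private boolean exitRequested = false;
            private int diskErrors = 0;

            private boolean shouldRun() { return !exitRequested; }
            private void setExitFlag()  { exitRequested = true; }

            public void copyBlocks() {
              try {
                while (true) {
                  // Interruption point: the previous step may have taken a while and
                  // may have flagged an error; this is the safe place to notice it.
                  if (!shouldRun()) {
                    break;
                  }
                  // Error conditions only raise the flag; they never exit directly.
                  if (!moveOneBlock() && ++diskErrors >= MAX_DISK_ERRORS) {
                    setExitFlag();
                  }
                }
              } finally {
                // Centralized exit handler: error-specific cleanup can be chained here.
                System.out.println("copyBlocks exiting after " + diskErrors + " disk errors");
              }
            }

            private boolean moveOneBlock() {
              // Stand-in for getNextBlock()/moveBlocAcrossVolume(): fail some moves
              // so the error path is exercised.
              return ThreadLocalRandom.current().nextInt(4) != 0;
            }

            public static void main(String[] args) {
              new ExitPatternSketch().copyBlocks();
            }
          }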

          Yes, the atomic nature of the shouldRun flag is confusing and perhaps not needed. It is an artifact of experimenting with copying multiple blocks while I was developing the code. It had a different structure, but then I found that enforcing bandwidth was harder and decided to copy a single block at a time.

          I really appreciate you taking the time to ask these questions and helping to make sure that I am on the right path.

          arpitagarwal Arpit Agarwal added a comment -

          +1 for the v002 patch.

          anu Anu Engineer added a comment -

          Arpit Agarwal & Lei (Eddy) Xu Thanks for the code review and the thoughtful comments and discussion. I will commit this to trunk soon.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10422 (See https://builds.apache.org/job/Hadoop-trunk-Commit/10422/)
          HDFS-10808. DiskBalancer does not execute multi-steps plan-redux. (aengineer: rev bee9f57f5ca9f037ade932c6fd01b0dad47a1296)

          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestDiskBalancerWithMockMover.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestDiskBalancer.java
          anu Anu Engineer added a comment -

          I have committed this to trunk


            People

            • Assignee:
              anu Anu Engineer
              Reporter:
              anu Anu Engineer
            • Votes:
              0
              Watchers:
              8

              Dates

              • Created:
                Updated:
                Resolved:
