Hadoop HDFS / HDFS-4878

On Remove Block, Block is not Removed from neededReplications queue

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0
    • Fix Version/s: 2.1.0-beta, 0.23.9
    • Component/s: namenode
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      Remove Block removes the specified block from the pendingReplications queue, but not from the neededReplications queue.

      The fix is to remove it from neededReplications as well.
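
      For illustration only, a minimal sketch of the shape of such a fix in BlockManager.removeBlock(); the names follow the discussion in the comments below (UnderReplicatedBlocks.remove(Block, int), with a priority of LEVEL taken here to mean "search all queues"), and the authoritative change is in the attached patches.

      // Sketch only, not the committed patch.
      public void removeBlock(Block block) {
        // ... existing removal logic (invalidates, corruptReplicas, blocksMap) ...
        pendingReplications.remove(block);   // the block was already cleared from pendingReplications
        // The fix: also drop the block from every priority queue in neededReplications.
        neededReplications.remove(block, UnderReplicatedBlocks.LEVEL);
      }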

      Attachments

      1. HDFS-4878.patch (5 kB, Tao Luo)
      2. HDFS-4878.branch-0.23.patch (4 kB, Ravi Prakash)
      3. HDFS-4878_branch2.patch (5 kB, Tao Luo)

        Issue Links

          Activity

          Tao Luo added a comment -

          Removing block from neededReplications queue when removeBlock() is called

          Tao Luo added a comment -

          HDFS-4867 NPE is a symptom of the block not being removed from the neededReplications queue.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12586178/HDFS-4878.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4477//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4477//console

          This message is automatically generated.

          Konstantin Shvachko added a comment -

          Tao, could you please update your patch so that it doesn't change the order of imports in BlockManager.java.
          You might also want to rename testMetaSaveWithOrphanedBlocks() to something else, because it should actually test that there are no orphaned blocks.

          Tao Luo added a comment -

          Konstantin, the test has been renamed to testMetasaveAfterDelete() and the import ordering has been fixed.

          Konstantin Shvachko added a comment -

          +1 looks good.
          Could you make a patch for branch-2 as well?

          Tao Luo added a comment -

          Branch-2 patch

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12586626/HDFS-4878.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4494//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4494//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12586654/HDFS-4878_branch2.patch
          against trunk revision .

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4495//console

          This message is automatically generated.

          Ravi Prakash added a comment -

          Do we need to call decrementReplicationIndex() after the remove? I am not sure I understand the whole mechanism, but could you please check?

          Plamen Jeliazkov added a comment -

          The only place where the replicationIndex is used is for this segment in computeUnderReplicatedBlocks(int).

          Integer replIndex = priorityToReplIdx.get(priority);

          // skip to the first unprocessed block, which is at replIndex
          for (int i = 0; i < replIndex && neededReplicationsIterator.hasNext(); i++) {
            neededReplicationsIterator.next();
          }

          Because our condition includes .hasNext() we will stop before trying to go outside our iterator. So we are safe with this change.

          The reason we cannot decrementReplicationIndex(), or why it would be difficult to, is that we cannot guarantee that we will have actually removed the block from any SPECIFIC priority within neededReplications. We do not know the priority level of this block within neededReplications when we try to do BlockManager.removeBlock(Block). If you look at remove(Block, int) in UnderReplicatedBlocks, it will try to remove the block from the first queue, and if that fails, it tries to remove it from all the queues. Therefore we will not be able to determine WHICH replicationIndex to decrement. However, as noted above, we are safeguarded by the iterator nonetheless.

          Now, with that being said, I can understand from a consistency standpoint why we would want to decrement the replicationIndex. However, in order to do it, we would need to add a method to UnderReplicatedBlocks that attempts to remove from a given priority and returns true ONLY if we actually removed the block from that priority; then we can safely decrement the replicationIndex of that priority.
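
          For illustration, a rough sketch of the kind of helper being described, with a hypothetical method name (removeFromPriority) and assuming direct access to the priority queues and decrementReplicationIndex(); it is not part of the attached patches.

          // Hypothetical addition to UnderReplicatedBlocks (sketch only): remove the block
          // from one specific priority queue and report whether that queue actually held it,
          // so the caller can safely decrement that priority's replicationIndex.
          synchronized boolean removeFromPriority(Block block, int priority) {
            if (priority < 0 || priority >= LEVEL) {
              return false;                            // unknown priority; caller can fall back to remove(block)
            }
            boolean removed = priorityQueues.get(priority).remove(block);
            if (removed) {
              decrementReplicationIndex(priority);     // keep the scan cursor on the same next block
            }
            return removed;
          }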

          Ravi Prakash added a comment -

          Hi Plamen! Thanks for your review.

          Because our condition includes .hasNext() we will stop before trying to go outside our iterator. So we are safe with this change.

          True! We won't throw an exception, but we would have skipped over a block. We won't get to this block again until the next iteration, and a block with lower priority may be scheduled before a higher one. It's rather simple to fix this, so I would much rather that we did. (Although I noticed that there are already several instances in which we don't.) Should we handle this in this JIRA or in another one?

          Could we refactor UnderReplicatedBlocks so that decrementReplicationIndex is called from within remove()? I would be a +1 for that change.

          Tao Luo added a comment -

          Hi Ravi.
          Thanks for your suggestion. I was thinking along the same lines, and have uploaded the patch.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12587154/HDFS-4878_branch2.patch
          against trunk revision .

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4504//console

          This message is automatically generated.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12587148/HDFS-4878.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4503//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4503//console

          This message is automatically generated.

          Konstantin Shvachko added a comment -

          Strictly speaking, we should decrementReplicationIndex() if we remove a block at a position before the replicationIndex, and should not decrement if the block's position is after the replicationIndex. In general we don't know the position of the block in the queue, so we cannot know whether the index should be decremented.
          This should be targeted in a different jira if desired. The worst thing that happens if we don't change replicationIndex is that some blocks will be replicated later, which happens to some blocks one way or another. Eventually everything will be processed, and that is what's important.
          I'd rather commit the original patches.
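
          For what it's worth, the adjustment being described would need position information that UnderReplicatedBlocks does not currently expose; a hypothetical sketch (positionOf() does not exist today) of what a correct decrement would look like:

          // Hypothetical sketch only -- the queues do not track a block's position today.
          int pos = priorityQueue.positionOf(block);   // would require a new or extended data structure
          if (priorityQueue.remove(block) && pos < replicationIndex) {
            // Removing an element before the cursor shifts later elements left by one,
            // so pull the cursor back to keep it pointing at the same next block.
            decrementReplicationIndex(priority);
          }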

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12587159/HDFS-4878.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4505//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4505//console

          This message is automatically generated.

          Konstantin Shvachko added a comment -

          I just committed this. Thank you Tao.

          Hudson added a comment -

          Integrated in Hadoop-trunk-Commit #3893 (See https://builds.apache.org/job/Hadoop-trunk-Commit/3893/)
          HDFS-4878. On Remove Block, block is not removed from neededReplications queue. Contributed by Tao Luo. (Revision 1491671)

          Result = SUCCESS
          shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1491671
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java
          Hudson added a comment -

          Integrated in Hadoop-Yarn-trunk #237 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/237/)
          HDFS-4878. On Remove Block, block is not removed from neededReplications queue. Contributed by Tao Luo. (Revision 1491671)

          Result = SUCCESS
          shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1491671
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk #1454 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1454/)
          HDFS-4878. On Remove Block, block is not removed from neededReplications queue. Contributed by Tao Luo. (Revision 1491671)

          Result = SUCCESS
          shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1491671
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java
          Ravi Prakash added a comment -

          Konstantin! Could you please check in this port to branch-0.23?

          Ravi Prakash added a comment -

          Re-opening for branch-0.23

          Konstantin Shvachko added a comment -

          Sure will do. Thanks for porting.

          Konstantin Shvachko added a comment -

          Committed to branch 0.23 thanks to Ravi.

          Hudson added a comment -

          Integrated in Hadoop-trunk-Commit #3910 (See https://builds.apache.org/job/Hadoop-trunk-Commit/3910/)
          Port HDFS-4878 to branch-0.23.9. (Revision 1492449)

          Result = SUCCESS
          shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1492449
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          Hudson added a comment -

          Integrated in Hadoop-Yarn-trunk #239 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/239/)
          Port HDFS-4878 to branch-0.23.9. (Revision 1492449)

          Result = SUCCESS
          shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1492449
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-0.23-Build #637 (See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/637/)
          HDFS-4878. On Remove Block, block is not removed from neededReplications queue. Contributed by Tao Luo. (Revision 1492448)

          Result = SUCCESS
          shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1492448
          Files :

          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
          • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #1429 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1429/)
          Port HDFS-4878 to branch-0.23.9. (Revision 1492449)

          Result = FAILURE
          shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1492449
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk #1456 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1456/)
          Port HDFS-4878 to branch-0.23.9. (Revision 1492449)

          Result = SUCCESS
          shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1492449
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

            People

            • Assignee: Tao Luo
            • Reporter: Tao Luo
            • Votes: 0
            • Watchers: 8
