Hadoop HDFS
HDFS-3119

Overreplicated block is not deleted even after the replication factor is reduced after sync followed by closing that file

    Details

    • Hadoop Flags:
      Reviewed

      Description

      Cluster setup:
      --------------

      1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB

      Step 1: write a file "filewrite.txt" of size 90 bytes with sync (not closed).
      Step 2: change the replication factor to 1 using the command "./hdfs dfs -setrep 1 /filewrite.txt".
      Step 3: close the file.

      • On the NN side, the log message "Decreasing replication from 2 to 1 for /filewrite.txt" has occurred, but the over-replicated block is not deleted even after the block report is sent from the DN.
      • Listing the file in the console using "./hdfs dfs -ls" shows the replication factor for that file as 1.
      • The fsck report for that file displays that the file is replicated to 2 datanodes.
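
      The sequence can be sketched with the public FileSystem API as below. This is a minimal, hypothetical repro sketch (the class name Hdfs3119Repro is made up here, and conf is assumed to point at the 2-DN cluster described above); it is not the attached test:

        import java.io.IOException;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FSDataOutputStream;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class Hdfs3119Repro {
          public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();            // assumes the cluster config above
            FileSystem fs = FileSystem.get(conf);
            Path file = new Path("/filewrite.txt");
            FSDataOutputStream out = fs.create(file, (short) 2); // file with replication factor 2
            out.write(new byte[90]);                             // 90-byte file
            out.hflush();                                        // step 1: sync, file still open
            fs.setReplication(file, (short) 1);                  // step 2: reduce replication to 1
            out.close();                                         // step 3: close the file
            // Expected: the excess replica is eventually invalidated; observed: it is not.
          }
        }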
      Attachments

      1. HDFS-3119-1.patch
        5 kB
        Uma Maheswara Rao G
      2. HDFS-3119-1.patch
        5 kB
        Ashish Singhi
      3. HDFS-3119.patch
        0.9 kB
        Ashish Singhi

        Activity

        Uma Maheswara Rao G added a comment -

        The block is not finalized yet; changing replication for this block may not be correct.

        Increasing/decreasing the replication for an unfinalized block would not work, as per my understanding.
        A simple case: updating the pipeline for further writes after the sync will be difficult if you change the replication for unfinalized blocks.

        Do you agree?

        J.Andreina added a comment -

        I understood your comments and I agree with them.

        But if the replication cannot be changed for an unfinalized block,
        then the log message "Decreasing replication from 2 to 1 for /filewrite.txt" on the NN side, when we decrease the replication for an unfinalized block, can be avoided.

        Uma Maheswara Rao G added a comment -

        Replication is not per block; it is per-file configuration.
        Also, we persist the replication factor in the edits, so it looks like this setReplication will get applied properly only for completed blocks.
        But for the incomplete blocks, there won't be any blocks updated in the blocksMap, so the nonExcess nodes would be 0 and it can't remove any nodes at the moment.

        I am not sure whether we are checking the replication count while completing the block.

        If we really need this to work:
        1) one way is to check the replication consistency while completing the block, as the new replication is already persisted;

        (or)

        2) we could completely restrict the setReplication call for non-closed files.

        The block report also may not detect this case.

        Let me recheck in case I am missing something.

        J.Andreina added a comment -

        Thanks Uma, I'll also check the same.

        Uma Maheswara Rao G added a comment -

        The actual problem is: we set the replication factor down from 2 to 1 and close the file.

        If the complete call succeeds with min replication factor 1, and only after this does the other DN's addStoredBlock request come in, then that call can process the over-replicated blocks, because the file might already have moved from FileUnderConstruction to finalized.

        The other case is: if the complete call succeeds with 2 addStoredBlock calls arriving immediately, before the fileInodeUnderConstruction is moved to a finalized one, then no one is left to process the over-replicated blocks.

        I feel the solution for this problem is to add an over-replication check in the BlockManager#checkReplication method, which is called on completing the file.

        The current code checks only neededReplications:

         public void checkReplication(Block block, int numExpectedReplicas) {
            // filter out containingNodes that are marked for decommission.
            NumberReplicas number = countNodes(block);
            if (isNeededReplication(block, numExpectedReplicas, number.liveReplicas())) { 
              neededReplications.add(block,
                                     number.liveReplicas(),
                                     number.decommissionedReplicas(),
                                     numExpectedReplicas);
            }
          }
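
        Based on that, a sketch of how the over-replication branch might look. This assumes processOverReplicatedBlock can be invoked here with null node hints, as it is from other call sites; it illustrates the idea and is not necessarily the exact patch:

         public void checkReplication(Block block, short numExpectedReplicas) {
            // filter out containingNodes that are marked for decommission.
            NumberReplicas number = countNodes(block);
            if (isNeededReplication(block, numExpectedReplicas, number.liveReplicas())) {
              neededReplications.add(block,
                                     number.liveReplicas(),
                                     number.decommissionedReplicas(),
                                     numExpectedReplicas);
            } else if (number.liveReplicas() > numExpectedReplicas) {
              // new branch: queue excess replicas of the completed block for invalidation
              processOverReplicatedBlock(block, numExpectedReplicas, null, null);
            }
          }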
        
        Tsz Wo Nicholas Sze added a comment -

        Hi J.Andreina,

        Have you waited some time between step 2 and fsck? I want to ask if the block remains over-replicated forever. Note that deleting replicas of over-replicated blocks is a low-priority task in the namenode.

        Uma Maheswara Rao G added a comment -

        Nicholas, thanks a lot for taking a look. I tried this case in my cluster with only one block; in that case this block itself should have priority anyway.

        For your question:
        I don't see any over-replication processing from the neededReplications priority queues. We just remove from the needed replication queues. Am I missing something?

        if (numEffectiveReplicas >= requiredReplication) {
                      if ( (pendingReplications.getNumReplicas(block) > 0) ||
                           (blockHasEnoughRacks(block)) ) {
                        neededReplications.remove(block, priority); // remove from neededReplications
                        neededReplications.decrementReplicationIndex(priority);
                        NameNode.stateChangeLog.info("BLOCK* "
                            + "Removing block " + block
                            + " from neededReplications as it has enough replicas.");
                        continue;
                      }
        

        Processing of over-replicated blocks happens straight away from the addStoredBlock and setReplication calls.

        Anyway let's see what happened in Andreina's cluster.

        Tsz Wo Nicholas Sze added a comment -

        Hi Uma, I see your point. I guess the replicas of the over-replicated block will be deleted after the next block report. If that is the case, then we don't have to do anything.

        Uma Maheswara Rao G added a comment -

        But if the block is already updated with the target datanodes, then we need not re-add the block from the block report, right?
        In this case the block is already updated in the blocksMap with locations, so we need not select this block for the toAdd list, and the call will not go to addStoredBlock again.

        //add replica if appropriate
            if (reportedState == ReplicaState.FINALIZED
                && storedBlock.findDatanode(dn) < 0) {
              toAdd.add(storedBlock);
            }
        

        I am just referring to the reportDiff flow here. Please correct me if I am missing something.

        J.Andreina added a comment -

        Hi,
        Even after the datanode has sent the block report many times, the over-replicated block is not deleted.
        Even if I execute fsck for that particular file after some time, the block remains over-replicated.

        Tsz Wo Nicholas Sze added a comment -

        If the block remains over-replicated forever, then it is a bug.

        Uma Maheswara Rao G added a comment -

        @Nicholas, I proposed the change in https://issues.apache.org/jira/browse/HDFS-3119?focusedCommentId=13240101&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13240101
        Do you agree that it is reasonable?

        Tsz Wo Nicholas Sze added a comment -

        > I don't see any OverReplicated processing from neededReplication priority Queues. We will just remove from needed replication queues. Am I missing?

        computeReplicationWork(..) only takes care of replication, not deletion. Deletion is done by computeInvalidateWork(..).

        addStoredBlock(..) does call processOverReplicatedBlock(..) but the values of numCurrentReplica or fileReplication may be incorrect. We should print them out for debugging.

        Uma Maheswara Rao G added a comment -

        > addStoredBlock(..) does call processOverReplicatedBlock(..) but the values of numCurrentReplica or fileReplication may be incorrect. We should print them out for debugging.

        Here addStoredBlock did not perform processOverReplicatedBlock because all DNs reported the block before the fileInodeUnderConstruction was finalized; addStoredBlock just returns if the block is in the underConstruction stage. After this step there is no other path that processes processOverReplicatedBlock again. The only place I see is checkReplication after the block is finalized, and that currently checks only for neededReplications, not for over-replication.

        This is reproducible, but the issue will appear random because of the other scenario: if the block meets min replication, the fileInodeUnderConstruction can be finalized first, and then the other addStoredBlock calls can perform processOverReplicatedBlock, so the block can be invalidated.

        Tsz Wo Nicholas Sze added a comment -

        > Here addStoredBlock did not perform processOverReplicatedBlock because all DNs reported the block before the fileInodeUnderConstruction was finalized. ...

        Good point! If the block is in COMPLETE state, I think we could handle over/under-replicated cases.

        Ashish Singhi added a comment -

        Based on the above comments from Uma and Nicholas, I am uploading a patch.
        Uma/Nicholas: can you review this patch once?
        I will upload a test file shortly.

        Ashish Singhi added a comment -

        Hi Brandon,
        Can you assign this issue to me? I have been working on this for many days.

        Uma Maheswara Rao G added a comment -

        Brandon and Ashish, thanks for your interest in this issue.

        @Ashish,
        It looks like the code formatting of your patch is wrong. Could you please update the patch with correct formatting in your next version?
        Also please take a look at http://wiki.apache.org/hadoop/HowToContribute.

        Brandon Li added a comment -

        Hi Ashish,
        Feel free to take it. I didn't find your name in the assignee list, so I just left it unassigned.

        Tsz Wo Nicholas Sze added a comment -

        Ashish, thanks for working on this! Assigned to you. Please try to add a unit test.

        Tsz Wo Nicholas Sze added a comment -

        Instead of casting numExpectedReplicas to short, could you change the declarations to short? i.e.

        +++ hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java	(working copy)
        @@ -2767,7 +2767,7 @@
             }
           }
         
        -  public void checkReplication(Block block, int numExpectedReplicas) {
        +  public void checkReplication(Block block, short numExpectedReplicas) {
             // filter out containingNodes that are marked for decommission.
             NumberReplicas number = countNodes(block);
             if (isNeededReplication(block, numExpectedReplicas, number.liveReplicas())) { 
        ===================================================================
        --- hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java	(revision 1306664)
        +++ hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java	(working copy)
        @@ -2146,7 +2146,7 @@
            * replication factor, then insert them into neededReplication
            */
           private void checkReplicationFactor(INodeFile file) {
        -    int numExpectedReplicas = file.getReplication();
        +    short numExpectedReplicas = file.getReplication();
             Block[] pendingBlocks = file.getBlocks();
             int nrBlocks = pendingBlocks.length;
             for (int i = 0; i < nrBlocks; i++) {
        
        Brandon Li added a comment -

        It would be nice to update the method description of FSNamesystem.checkReplicationFactor to indicate it processes over-replication too.
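
        For instance, the javadoc could be extended roughly as follows. This is a sketch of the suggested wording, not the committed text; it assumes the loop body calls BlockManager#checkReplication (via the blockManager field) as in the diff above:

           /**
            * Check all blocks of a file. If any blocks are lower than their
            * intended replication factor, then insert them into
            * neededReplication; if any are higher, process them as
            * over-replicated so that excess replicas get invalidated.
            */
           private void checkReplicationFactor(INodeFile file) {
             short numExpectedReplicas = file.getReplication();
             Block[] pendingBlocks = file.getBlocks();
             int nrBlocks = pendingBlocks.length;
             for (int i = 0; i < nrBlocks; i++) {
               // checkReplication now also handles the over-replicated case
               blockManager.checkReplication(pendingBlocks[i], numExpectedReplicas);
             }
           }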

        Ashish Singhi added a comment -

        Thanks, Brandon, for being so kind.
        Thanks Uma, Nicholas and Brandon for the patch review and comments.

        The latest patch addresses Uma's, Nicholas's and Brandon's comments. I have also added a test case for the patch.

        Thanks Uma for your offline help as well.

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12521139/HDFS-3119-1.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 3 new or modified tests.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in .

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2168//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2168//console

        This message is automatically generated.

        Ashish Singhi added a comment -

        Hi all,
        Uma/Nicholas/Brandon, can you please review this patch?

        Uma Maheswara Rao G added a comment -

        Hi Ashish, Thanks a lot for working on this issue.

        I just reviewed the patch. Patch looks good to me. +1
        I will commit this patch tomorrow, if there are no other comments.

        @Nicholas/ Brandon, do you have any comments to address?

        Ashish Singhi added a comment -

        @Uma: My pleasure! And thanks for reviewing.

        Brandon Li added a comment -

        The patch looks good to me.

        Tsz Wo Nicholas Sze added a comment -

        +1 on the patch.

        Uma, I will leave this to you for committing it.

        Ashish Singhi added a comment -

        Brandon and Nicholas, thanks for reviewing.

        Uma Maheswara Rao G added a comment -

        Updated the same patch as Ashish's, rebased on trunk.

        Hudson added a comment -

        Integrated in Hadoop-Common-trunk-Commit #2029 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2029/)
        HDFS-3119. Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file. Contributed by Ashish Singhi. (Revision 1311380)

        Result = SUCCESS
        umamahesh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1311380
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
        Uma Maheswara Rao G added a comment -

        I have committed this to trunk and branch-2. Thanks a lot Ashish for your contribution.
        Thanks a lot Nicholas and Brandon for reviews!

        Hudson added a comment -

        Integrated in Hadoop-Hdfs-trunk-Commit #2104 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2104/)
        HDFS-3119. Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file. Contributed by Ashish Singhi. (Revision 1311380)

        Result = SUCCESS
        umamahesh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1311380
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk-Commit #2040 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2040/)
        HDFS-3119. Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file. Contributed by Ashish Singhi. (Revision 1311380)

        Result = ABORTED
        umamahesh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1311380
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-trunk #1010 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1010/)
        HDFS-3119. Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file. Contributed by Ashish Singhi. (Revision 1311380)

        Result = FAILURE
        umamahesh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1311380
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk #1045 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1045/)
        HDFS-3119. Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file. Contributed by Ashish Singhi. (Revision 1311380)

        Result = SUCCESS
        umamahesh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1311380
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
        Kihwal Lee added a comment -

        Committed to branch-0.23.

        Hudson added a comment -

        Integrated in Hadoop-Hdfs-0.23-Build #513 (See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/513/)
        merge -r 1311379:1311380 Merging from trunk to branch-0.23 to fix HDFS-3119 (Revision 1441656)

        Result = SUCCESS
        kihwal : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1441656
        Files :

        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java

          People

          • Assignee:
            Ashish Singhi
          • Reporter:
            J.Andreina
          • Votes:
            0
          • Watchers:
            12
