Hadoop HDFS
HDFS-4270

Replications of the highest priority should be allowed to choose a source datanode that has reached its max replication limit

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 3.0.0, 0.23.5
    • Fix Version/s: 3.0.0, 2.0.3-alpha, 0.23.6
    • Component/s: namenode
    • Labels:
      None

      Description

      Blocks that have been identified as under-replicated are placed on one of several priority queues. The highest priority queue is essentially reserved for situations in which only one replica of the block exists, meaning it should be replicated ASAP.

      The ReplicationMonitor periodically computes replication work, and a call to BlockManager#chooseUnderReplicatedBlocks selects a given number of under-replicated blocks, choosing blocks from the highest-priority queue first and working down to the lowest priority queue.

      In the subsequent call to BlockManager#computeReplicationWorkForBlocks, a source for the replication is chosen from among datanodes that have an available copy of the block needed. This is done in BlockManager#chooseSourceDatanode.

      chooseSourceDatanode picks at random a datanode that holds a replica and has not yet reached its replication limit, preferring datanodes that are currently decommissioning.

      However, the priority queue of the block does not inform the logic. If a datanode holds the last remaining replica of a block and has already reached its replication limit, the node is dismissed outright and the replication is not scheduled.

      In some situations, this could lead to data loss, as the last remaining replica could disappear if an opportunity is not taken to schedule a replication. It would be better to waive the max replication limit in cases of highest-priority block replication.
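The proposed change can be sketched as follows. This is an illustrative sketch of the idea only, not the actual BlockManager code; the names HIGHEST_PRIORITY, MAX_REPLICATION_STREAMS, and mayServeAsSource are hypothetical.

```java
// Sketch of the proposed source-selection change. Identifiers are
// hypothetical, not the real BlockManager names.
public class SourceWaiverSketch {
    /** Queue index for blocks with only a single remaining replica. */
    static final int HIGHEST_PRIORITY = 0;
    /** Per-datanode limit on concurrently scheduled replication streams. */
    static final int MAX_REPLICATION_STREAMS = 2;

    /**
     * Before the fix, a node over the limit is dismissed regardless of
     * priority. The proposal: waive the limit when losing this replica
     * would mean losing the block.
     */
    static boolean mayServeAsSource(int scheduledStreams, int priority) {
        if (priority == HIGHEST_PRIORITY) {
            return true; // last remaining replica: schedule it anyway
        }
        return scheduledStreams < MAX_REPLICATION_STREAMS;
    }
}
```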

      1. HDFS-4270.patch
        11 kB
        Derek Dagit
      2. HDFS-4270-branch-0.23.patch
        10 kB
        Derek Dagit
      3. HDFS-4270.patch
        7 kB
        Derek Dagit
      4. HDFS-4270.patch
        7 kB
        Derek Dagit
      5. HDFS-4270.patch
        7 kB
        Derek Dagit
      6. HDFS-4270-branch-0.23.patch
        6 kB
        Derek Dagit

        Activity

        Derek Dagit created issue -
        Derek Dagit made changes -
        Field Original Value New Value
        Attachment HDFS-4270-branch-0.23.patch [ 12556079 ]
        Derek Dagit made changes -
        Attachment HDFS-4270.patch [ 12556080 ]
        Derek Dagit made changes -
        Attachment HDFS-4270.patch [ 12556080 ]
        Derek Dagit made changes -
        Attachment HDFS-4270-branch-0.23.patch [ 12556079 ]
        Derek Dagit made changes -
        Attachment HDFS-4270-branch-0.23.patch [ 12556115 ]
        Derek Dagit made changes -
        Attachment HDFS-4270.patch [ 12556116 ]
        Derek Dagit added a comment -

        New patches add a second assert to the test, and fix some formatting/readability issues.

        Aaron T. Myers added a comment -

        Marking patch available for Derek so that test-patch runs.

        Derek - minor thing, but please don't set the "fix versions" field until the patch is actually committed. Before then, setting the affects/targets versions fields is sufficient.

        Aaron T. Myers made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        Fix Version/s 3.0.0 [ 12320356 ]
        Fix Version/s 2.0.3-alpha [ 12323274 ]
        Fix Version/s 0.23.6 [ 12323503 ]
        Tsz Wo Nicholas Sze added a comment -

        I think we still need a limit for highest priority replications. Otherwise, a large number of replications could be scheduled to a datanode and then nothing can be done. How about adding a (hard) limit for highest priority replications? The current conf is a soft limit; only highest priority replications can pass it.

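The soft/hard-limit scheme suggested here (and, per the later commit message, eventually adopted) can be sketched like this. Identifiers are hypothetical, not the actual DFSConfigKeys/BlockManager names; the hard-limit default of 4 is the value agreed later in the thread.

```java
// Sketch of the soft/hard limit scheme. Identifiers are hypothetical.
public class SoftHardLimitSketch {
    static final int HIGHEST_PRIORITY = 0; // queue for last-replica blocks
    static final int SOFT_LIMIT = 2;       // existing per-node max streams
    static final int HARD_LIMIT = 4;       // new cap, default settled later in the thread

    static boolean mayServeAsSource(int scheduledStreams, int priority) {
        if (scheduledStreams < SOFT_LIMIT) {
            return true; // under the soft limit: any priority may be scheduled
        }
        // Over the soft limit: only highest-priority replications continue,
        // and even those stop at the hard limit so a node cannot be flooded.
        return priority == HIGHEST_PRIORITY && scheduledStreams < HARD_LIMIT;
    }
}
```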
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12556116/HDFS-4270.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 1 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        -1 javadoc. The javadoc tool appears to have generated -6 warning messages.

        -1 eclipse:eclipse. The patch failed to build with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.server.namenode.TestEditLog

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/3600//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3600//console

        This message is automatically generated.

        Daryn Sharp added a comment -

        I don't fully understand this subsystem, but I'm a bit torn over limiting the "only 1 block left" replications. This should be a rare event, but it's a critical situation when it does occur. I'm unclear whether the max repls limit is compared against inflight or queued repls. If the latter, perhaps higher prio blocks should displace already queued blocks for that DN? If the "only 1 block left" repls are subjected to a new hard limit, is there an issue with how quickly the monitor will cycle back to schedule the critical blocks?

        Based on an actual incident: we lost most of a rack, then just so happened to lose the third DN before replication occurred. A lot of nodes were being decommissioned, which appears to have delayed replication after the first DN was lost and again after the second DN on the rack was lost. The third DN's disk with the remaining replica died hours later, and was decommissioned with no notification that the block was lost. There may be more bugs involved, but this seemed like an obvious fix to mitigate the risk.

        Derek Dagit added a comment -

        Hi Aaron,

        Noted. I meant to remove the entries in fixed, but I must have forgotten. I'll try not to do that in the future.

        Hi Nicholas,

        I do see the concern over making this case unbounded, for the sake of the NN.

        I am curious: historically, why was the default limit set to 2?

        Derek Dagit added a comment -

        Canceling patch for now.

        Later after discussion, I'll fix the errors and upload new patches.

        Derek Dagit made changes -
        Status Patch Available [ 10002 ] Open [ 1 ]
        Tsz Wo Nicholas Sze added a comment -

        > ..., historically why was the default limit set to 2?

        I actually don't know why the default is 2. Let me check.

        Derek Dagit added a comment -

        Manually ran eclipse:eclipse -> works for me

        Manually ran TestEditLog.testFuzzSequences -> passes for me

        Javadoc error was a build error: connection timed out while fetching a dependency.

        Derek Dagit added a comment -

        Re-attaching patch, as the errors reported by the bot appear to be transient.

        Derek Dagit made changes -
        Attachment HDFS-4270.patch [ 12560296 ]
        Derek Dagit made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12560296/HDFS-4270.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 1 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/3632//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3632//console

        This message is automatically generated.

        Derek Dagit added a comment -

        Reattaching patch again. The balancer test failure looks like another spurious result.

        Derek Dagit made changes -
        Attachment HDFS-4270.patch [ 12560368 ]
        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12560368/HDFS-4270.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 1 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/3634//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3634//console

        This message is automatically generated.

        Derek Dagit added a comment -

        If we consider a hard limit, what should the limit be?

        Tsz Wo Nicholas Sze added a comment -

        How about making it configurable and setting the default to 4?

        Derek Dagit added a comment -

        New hard-limit config, defaults to 4.

        Derek Dagit made changes -
        Attachment HDFS-4270-branch-0.23.patch [ 12561918 ]
        Derek Dagit added a comment -

        New hard-limit config, defaults to 4.

        Derek Dagit made changes -
        Attachment HDFS-4270.patch [ 12561923 ]
        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12561923/HDFS-4270.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 1 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/3684//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3684//console

        This message is automatically generated.

        Tsz Wo Nicholas Sze added a comment -

        +1 patch looks good.

        Tsz Wo Nicholas Sze made changes -
        Hadoop Flags Reviewed [ 10343 ]
        Tsz Wo Nicholas Sze added a comment -

        I have committed this. Thanks, Derek!

        Tsz Wo Nicholas Sze made changes -
        Status Patch Available [ 10002 ] Resolved [ 5 ]
        Fix Version/s 3.0.0 [ 12320356 ]
        Fix Version/s 2.0.3-alpha [ 12323274 ]
        Resolution Fixed [ 1 ]
        Hudson added a comment -

        Integrated in Hadoop-trunk-Commit #3174 (See https://builds.apache.org/job/Hadoop-trunk-Commit/3174/)
        HDFS-4270. Introduce soft and hard limits for max replication so that replications of the highest priority are allowed to choose a source datanode that has reached its soft limit but not the hard limit. Contributed by Derek Dagit (Revision 1428739)

        Result = FAILURE
        szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1428739
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
        Hudson added a comment -

        Integrated in Hadoop-Yarn-trunk #86 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/86/)
        HDFS-4270. Introduce soft and hard limits for max replication so that replications of the highest priority are allowed to choose a source datanode that has reached its soft limit but not the hard limit. Contributed by Derek Dagit (Revision 1428739)

        Result = FAILURE
        szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1428739
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-trunk #1275 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1275/)
        HDFS-4270. Introduce soft and hard limits for max replication so that replications of the highest priority are allowed to choose a source datanode that has reached its soft limit but not the hard limit. Contributed by Derek Dagit (Revision 1428739)

        Result = FAILURE
        szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1428739
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk #1305 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1305/)
        HDFS-4270. Introduce soft and hard limits for max replication so that replications of the highest priority are allowed to choose a source datanode that has reached its soft limit but not the hard limit. Contributed by Derek Dagit (Revision 1428739)

        Result = SUCCESS
        szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1428739
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
        Thomas Graves added a comment -

        I pulled this into branch-0.23

        Thomas Graves made changes -
        Fix Version/s 0.23.6 [ 12323503 ]
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-0.23-Build #485 (See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/485/)
        HDFS-4270. Replications of the highest priority should be allowed to choose a source datanode that has reached its max replication limit (Derek Dagit via tgraves) (Revision 1428883)

        Result = FAILURE
        tgraves : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1428883
        Files :

        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
        Arun C Murthy made changes -
        Status Resolved [ 5 ] Closed [ 6 ]

          People

          • Assignee:
            Derek Dagit
            Reporter:
            Derek Dagit
          • Votes:
            0
          • Watchers:
            12
