HBASE-10873

Control number of regions assigned to backup masters

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.99.0
    • Component/s: Balancer
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      By default, a backup master is treated just like any other regionserver, so it can host as many regions as a regionserver does. When the backup master becomes the active one, the region balancer needs to move the user regions on this master to other region servers. To minimize that impact, it's better not to assign too many regions to backup masters. On the other hand, it may not be good to leave the backup masters completely idle, hosting no regions at all, either.

      We should make this adjustable so that users can control how many regions to assign to each backup master.
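      As a rough illustration of the proposal (this is a sketch, not the actual HBase balancer code; the method names and the weight semantics are assumptions drawn from the discussion below), a weight of w could mean a backup master hosts about 1/w of a normal region server's load:

```java
// Hypothetical sketch: with weight w, a backup master is expected to
// host about 1/w of the regions a normal region server hosts; a
// non-positive weight means backup masters host no regions at all.
public class BackupMasterWeight {

    /** Expected region count for one server under the weight scheme. */
    static int expectedRegions(int totalRegions, int normalServers,
                               int backupMasters, int weight, boolean isBackup) {
        if (weight <= 0) {
            // All-or-nothing: backup masters stay idle.
            return isBackup ? 0 : (int) Math.ceil((double) totalRegions / normalServers);
        }
        // Each backup master counts as 1/weight of a normal server.
        double effectiveServers = normalServers + (double) backupMasters / weight;
        double perNormalServer = totalRegions / effectiveServers;
        return (int) Math.round(isBackup ? perNormalServer / weight : perNormalServer);
    }

    public static void main(String[] args) {
        // 400 regions, 7 region servers, 1 backup master with weight 4:
        // the backup master carries about a quarter of a normal server's load.
        System.out.println(expectedRegions(400, 7, 1, 4, false));
        System.out.println(expectedRegions(400, 7, 1, 4, true));
    }
}
```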

      1. hbase-10873_v3.patch
        45 kB
        Jimmy Xiang
      2. hbase-10873_v2.patch
        44 kB
        Jimmy Xiang
      3. hbase-10873.patch
        42 kB
        Jimmy Xiang

          Activity

          Jimmy Xiang added a comment -

          Attached the first patch. It's also on RB: https://reviews.apache.org/r/20088/

          This patch introduces a weight concept for user regions assigned to the active/backup master. Adjusting the weight controls the number of user regions assigned to backup masters. The weight logic is used for 1) region balancing, 2) round robin assignment, and 3) random assignment. When retaining assignment finds that the original server is not available, the same weight-based random assignment logic is used to choose a new server.
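          The weight-based random assignment mentioned here could be sketched like this (a minimal illustration under the assumption that a server's chance of being picked is proportional to 1/weight, so a backup master with weight 4 is chosen a quarter as often as a normal region server; class and field names are hypothetical):

```java
import java.util.List;
import java.util.Random;

// Hypothetical sketch of weight-based random assignment: selection
// probability is proportional to 1/weight, so higher-weight servers
// (backup masters) receive proportionally fewer regions.
public class WeightedRandomAssignment {

    static final class Server {
        final String name;
        final int weight; // 1 for a normal region server, higher for a backup master
        Server(String name, int weight) { this.name = name; this.weight = weight; }
    }

    /** Pick a destination server with probability proportional to 1/weight. */
    static Server pick(List<Server> servers, Random rnd) {
        double total = 0;
        for (Server s : servers) {
            total += 1.0 / s.weight;
        }
        double r = rnd.nextDouble() * total;
        for (Server s : servers) {
            r -= 1.0 / s.weight;
            if (r <= 0) {
                return s;
            }
        }
        return servers.get(servers.size() - 1); // guard against rounding error
    }
}
```

          The same selection routine could back both round-robin spillover and retained assignment when the original server is gone, as the comment above describes.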

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12639016/hbase-10873.patch
          against trunk revision .
          ATTACHMENT ID: 12639016

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 12 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          +1 site. The mvn site goal succeeds with this patch.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.procedure.TestZKProcedure

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//console

          This message is automatically generated.

          Jimmy Xiang added a comment -

          Attached patch v2 that addressed Matteo's review comments. Thanks.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12640327/hbase-10873_v2.patch
          against trunk revision .
          ATTACHMENT ID: 12640327

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 12 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          +1 site. The mvn site goal succeeds with this patch.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9297//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9297//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9297//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9297//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9297//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9297//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9297//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9297//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9297//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9297//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9297//console

          This message is automatically generated.

          Matteo Bertozzi added a comment -

          +1, v2 looks good to me

          Enis Soztutar added a comment -

          Is this a good idea? Today the META/ns tables are already shared with the other tables, and there is little special treatment for the region server hosting meta.
          In 0.98- style deployments where the master and backup masters are co-located with region servers, this would basically reduce the capacity of the cluster. Otherwise, in deployments where the backup masters and the active master are not co-located, users will probably want the backup masters to not host any regions (since that was already the case).

          I think we should do an all-or-nothing approach rather than this weight-based config, which is just yet another parameter to worry about.

          Jimmy Xiang added a comment -

          Today META/ns tables are already shared with the other tables, and there is little special treatment for the region server hosting meta.

          I think that's the issue we'd like to enhance. We don't want the region server hosting meta to host too many heavy regions, so that we can improve meta availability, right?

          I think we should do an all-or-nothing approach rather than this weight-based config, which is just yet another parameter to worry about.

          Currently, we already support all or nothing. This patch makes it more flexible: users don't have to leave those backup masters totally idle, and they don't need to worry too much about region movement/data locality in case each region server hosts many regions.

          Jimmy Xiang added a comment -

          Enis Soztutar, what do you think? Are you ok to get this patch in? Thanks.

          Enis Soztutar added a comment -

          I think that's the issue we'd like to enhance. We don't want the region server hosting meta to host too many heavy regions, so that we can improve meta availability, right?

          We already have separate priority handlers, a separate wal, etc. in place, right? Plus the active master right now only hosts system tables, so this is not a problem for backup masters, right? Are you concerned with the failover?

          Currently, we already support all or nothing. This patch makes it more flexible: users don't have to leave those backup masters totally idle, and they don't need to worry too much about region movement/data locality in case each region server hosts many regions.

          Fair enough, but configuring this for a cluster is still not clear. What value should we default to? What value should a 20-node cluster use, etc.? It might be ok if we default to backups not hosting any regions. Then, if the user prefers to use the capacity, they can explicitly request so.

          Jimmy Xiang added a comment - edited

          Are you concerned with the failover?

          Yes, that's the concern here. I was thinking that later on we can open meta on the backup masters as well, after we support opening a region on multiple servers. This way, it will be much faster to fail over on an active master change.

          As to the weight: basically, it controls how many regions a backup master hosts compared to a normal region server. The default is 4, i.e. by default the number of regions opened on a backup master is 1/4 of the number opened on a normal region server. Should we set the default value to 1 (a backup master is treated the same as a normal region server) or a big number (fewer regions)? If big, how big is proper?

          If we set the weight to 1, there is no default behavior change: you can just ignore the new configuration parameter; in the meantime, others can fine-tune it for more flexibility if needed.

          Enis Soztutar added a comment -

          Should we set the default value to 1 (backup master is treated the same as a normal region server) or a big number (less regions)? If big, how big should be proper?

          I was thinking of setting the value to 0, which gives us the 0.98- behavior of backup masters not hosting any regions. If users want to use extra capacity they can enable this setting manually.

          Jimmy Xiang added a comment -

          The patch is just for trunk. I thought we prefer 1.0 to have the new behavior.

          stack added a comment -

          I was thinking of setting the value to 0, which gives us the 0.98- behavior of backup masters not hosting any regions. If users want to use extra capacity they can enable this setting manually.

          I suggest we let this issue in and then discuss what 1.0 rolls out with elsewhere, in a subtask on the 1.0 issue.

          I'd be in favor of the master and backup masters carrying a full complement of regions in 1.0 (if we can ensure that the master handlers are processed ahead of any others); i.e. more radical than Jimmy has it with his adding support for master and backup masters carrying 'light' loadings. Sounds like you'd like us to replicate the old deploy layout, Enis Soztutar, but with the option to move to the 'new'.

          Enis Soztutar added a comment -

          Sounds like you'd like us to replicate the old deploy layout Enis Soztutar but with option to move to the 'new'.

          Yes. Least amount of surprises. Since we are right now defaulting to the master having no user-level regions, we should also default to backup masters not having any user-level regions.
          It may be ok to change the defaults so that both active and backup masters have a normal region load (weight 1), but I think we should be consistent between the region loads of active and backup masters. Since META is already shared in 0.98- setups, it should be fine to have the master be just another region server. In that case, we do not even need this issue?

          stack added a comment -

          In that case, we do not even need this issue?

          Perhaps. I'm not sure how we currently keep backup masters 'clear'. Wouldn't we want it done in the LB? Or how is it done currently, Jimmy Xiang? Thanks.

          Jimmy Xiang added a comment -

          In that case, we do not even need this issue?

          This patch makes it possible to put some load on backup masters, so that users can make adjustments per their needs. I hope we wouldn't need this issue, were it not for the region-moving concern during master failover.

          I'm not sure how we currently keep backup masters 'clear'. Wouldn't we want it done in the LB?

          Yes, it's done in the LB.

          Jimmy Xiang added a comment -

          Can we get this in and deal with the default values in a followup issue?

          stack added a comment -

          Good by me. Thinking on it, I think it'd be fine to flag in a release note that the Master and backup masters will carry a full complement of regions after upgrade (unless you configure it to work the old way). Having this option where *Masters carry a lighter loading would be a nice-to-have, though I think it will be little used.

          Jimmy Xiang If we set it so backup masters should NOT carry regions, does it use this code path – the one in this patch?

          See what Enis Soztutar says.

          Jimmy Xiang added a comment -

          If we set it so backup masters should NOT carry regions, does it use this code path – the one in this patch?

          No, it doesn't use this patch. It is a different configuration. Probably it's better to do some consolidation and use just the configuration in this patch?

          stack added a comment -

          Probably it's better to do some consolidation and use just the configuration in this patch?

          It would make this patch indispensable (smile).

          Jimmy Xiang added a comment -

          Attached patch v3, which uses only one configuration to control whether any regions are assigned to backup masters and, if so, how many regions to assign to backup masters relative to normal region servers. The default weight is 1, i.e. a backup master is treated the same as a normal region server in region assignment.
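          The single setting described here might then look like this in hbase-site.xml (the property name below is an assumption for illustration, check the committed patch for the real one; presumably a non-positive value disables assignment to backup masters entirely):

```xml
<!-- Hypothetical property name, for illustration only.
     1 (the default) treats a backup master like a normal region server;
     a larger weight assigns it proportionally fewer regions. -->
<property>
  <name>hbase.balancer.backupMasterWeight</name>
  <value>4</value>
</property>
```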

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12641492/hbase-10873_v3.patch
          against trunk revision .
          ATTACHMENT ID: 12641492

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 12 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          +1 site. The mvn site goal succeeds with this patch.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.client.TestMultiParallel

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9372//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9372//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9372//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9372//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9372//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9372//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9372//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9372//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9372//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9372//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9372//console

          This message is automatically generated.

          Jimmy Xiang added a comment -

          The test failure is not related. If no objection, I will commit v3 tomorrow. We can change the default setting in a separate issue if needed.
          Jimmy Xiang added a comment -

          Integrated into trunk. Thanks.
          Hudson added a comment -

          FAILURE: Integrated in HBase-TRUNK #5117 (See https://builds.apache.org/job/HBase-TRUNK/5117/)
          HBASE-10873 Control number of regions assigned to backup masters (jxiang: rev 1590078)

          • /hbase/trunk/hbase-common/src/main/resources/hbase-default.xml
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/ClusterLoadState.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/FavoredNodeLoadBalancer.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/SimpleLoadBalancer.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/BalancerTestBase.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestBaseLoadBalancer.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestDefaultLoadBalancer.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestStochasticLoadBalancer.java
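          The commit above includes a change to hbase-default.xml, which suggests the backup-master weight described in the patch is exposed as a configuration property. A minimal sketch of how a user might tune it in hbase-site.xml follows — the property name hbase.balancer.backupMasterWeight, its default, and the exact weight semantics are assumptions inferred from this issue's description, not verified against the shipped default file:

```xml
<!-- hbase-site.xml (sketch; property name and default assumed from this patch) -->
<property>
  <name>hbase.balancer.backupMasterWeight</name>
  <!-- Assumed semantics: a larger weight makes the balancer, round robin
       assignment, and random assignment place proportionally fewer user
       regions on backup masters, so less data must move when a backup
       master becomes the active one. -->
  <value>4</value>
</property>
```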
          Enis Soztutar added a comment -

          Closing this issue after 0.99.0 release.

            People

            • Assignee: Jimmy Xiang
            • Reporter: Jimmy Xiang
            • Votes: 0
            • Watchers: 6