HBASE-4395: EnableTableHandler races with itself

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.90.4
    • Fix Version/s: 0.90.5
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed
    • Release Note:
      EnableTableHandler used to run multiple enable passes over the table's regions; it now runs a single pass and eventually times out if not all the regions are assigned and none are still moving.

      Description

      Very often when we try to enable a big table we get something like:

      2011-09-02 12:21:56,619 FATAL org.apache.hadoop.hbase.master.HMaster: Unexpected state trying to OFFLINE; huge_ass_region_name state=PENDING_OPEN, ts=1314991316616
      java.lang.IllegalStateException
      at org.apache.hadoop.hbase.master.AssignmentManager.setOfflineInZooKeeper(AssignmentManager.java:1074)
      at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1030)
      at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:858)
      at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:838)
      at org.apache.hadoop.hbase.master.handler.EnableTableHandler$BulkEnabler$1.run(EnableTableHandler.java:154)
      at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
      at java.lang.Thread.run(Thread.java:662)
      2011-09-02 12:21:56,620 INFO org.apache.hadoop.hbase.master.HMaster: Aborting

      The issue is that EnableTableHandler runs BulkEnabler multiple times, and by the time it runs a second pass, working from a stale list of still-not-enabled regions, it can try to set a region offline in ZK just after that region's state has changed. Case in point:

      2011-09-02 12:21:56,616 DEBUG org.apache.hadoop.hbase.master.AssignmentManager: Assigning region huge_ass_region_name to sv4r23s16,60020,1314880035029
      2011-09-02 12:21:56,619 FATAL org.apache.hadoop.hbase.master.HMaster: Unexpected state trying to OFFLINE; huge_ass_region_name state=PENDING_OPEN, ts=1314991316616

      Here the first line is the first assign done by the first thread, and the second line is from the second thread that processed the same region around the same time, only 3 ms later. After that, the master dies, and it's pretty sad when it restarts because it fails over an enabling table, which is ungodly slow.

      I'm pretty sure there's a window where double assignments are possible.

      Talking with Stack, it doesn't really make sense to run multiple enable passes since the list of regions is static (the table is disabled!). We should just run it once and wait. There's also a lot of cleanup to do in EnableTableHandler since it still refers to disabling the table (copy-pasta, I guess).
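
      As a rough illustration of that "run it once and wait" idea, a minimal sketch follows. It is not the actual patch; RegionAssigner, enableOnce and the other names are made up for the example rather than taken from the real AssignmentManager/EnableTableHandler API.

      // Hypothetical sketch only: assign each region of the (static) region list
      // exactly once, then wait for the assignments to complete instead of
      // re-running a bulk enable pass against a stale list.
      import java.util.List;
      import java.util.concurrent.TimeUnit;

      public class SingleEnablePassSketch {

        /** Illustrative stand-in for the master's assignment calls. */
        interface RegionAssigner {
          void assign(String regionName);
          int onlineRegionCount(String tableName);
        }

        static boolean enableOnce(RegionAssigner assigner, String tableName,
            List<String> regions, long timeoutMs) throws InterruptedException {
          for (String region : regions) {
            assigner.assign(region);           // one assign per region, no second pass
          }
          long deadline = System.currentTimeMillis() + timeoutMs;
          while (System.currentTimeMillis() < deadline) {
            if (assigner.onlineRegionCount(tableName) >= regions.size()) {
              return true;                     // everything came online
            }
            TimeUnit.MILLISECONDS.sleep(100);  // poll until done or timed out
          }
          return false;                        // caller logs and gives up
        }
      }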

      1. HBASE-4395-trunk.patch
        3 kB
        Jean-Daniel Cryans
      2. HBASE-4395-0.90-v2.patch
        3 kB
        Jean-Daniel Cryans
      3. HBASE-4395-0.90.patch
        3 kB
        Jean-Daniel Cryans

        Activity

        Jean-Daniel Cryans added a comment -

        Looking more into this, deep in BulkEnabler it calls waitUntilDone, which has a timeout of 5 minutes. I guess the loop is how we take action when we time out. I don't think it's a good idea in hindsight.

        Since we already have access to both the number of regions in the table and the number of online regions, I think we could do a trick where we count the latter in a loop and, as long as we make progress, we don't time out. If we do time out, the best we can do is log an angry message in the master log... I don't think looping forever is better, and the same goes for calling BulkEnabler multiple times.
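
        A minimal sketch of that progress-aware wait, assuming a countOnlineRegions supplier and a fixed region total (these names are illustrative, not the real waitUntilDone signature): the deadline is pushed out whenever the online count grows, and a genuine timeout only produces a log message.

        import java.util.function.IntSupplier;

        public class ProgressAwareWaitSketch {

          static boolean waitUntilDone(IntSupplier countOnlineRegions, int totalRegions,
              long timeoutMs, long checkIntervalMs, long progressExtensionMs)
              throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMs;
            int lastOnline = countOnlineRegions.getAsInt();
            while (System.currentTimeMillis() < deadline) {
              Thread.sleep(checkIntervalMs);
              int online = countOnlineRegions.getAsInt();
              if (online >= totalRegions) {
                return true;                       // all regions are online
              }
              if (online > lastOnline) {           // progress since the last check
                lastOnline = online;
                deadline += progressExtensionMs;   // punt on the timeout
              }
            }
            // Timed out without recent progress; the best we can do is complain.
            System.err.println("Gave up enabling table: " + lastOnline + "/"
                + totalRegions + " regions online");
            return false;
          }
        }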

        stack added a comment -

        Looking more into this, deep in BulkEnabler it calls waitUntilDone, which has a timeout of 5 minutes. I guess the loop is how we take action when we time out. I don't think it's a good idea in hindsight.

        Yeah. Notion was you'd turn off timeout while we got the regions out. After this configurable amount of time – 5 minutes being default – we'd then let the timeout monitor cut in to take care of any stragglers.

        That was original thinking.

        Since we already have access to both the number of regions in the table and the number of online regions, I think we could do a trick where we count the latter in a loop and, as long as we make progress, we don't time out.

        This is a better idea.

        Jean-Daniel Cryans added a comment -

        Removes the loop and adds the timeout bumping. Tried it on a largish table (1k regions) and it worked fine. Also passes TestAdmin.

        Ted Yu added a comment -

        In the loop of waitUntilDone():

        +        // Punt on the timeout as long we make progress
        +        if (regions.size() < lastNumberOfRegions) {
        +          lastNumberOfRegions = regions.size();
        +          remaining += waitingTimeForEvents;
        +        }
                 remaining = timeout - (System.currentTimeMillis() - startTime);
        

        I think regions.size() should increase in the above loop, so I don't understand the condition for the if above.
        Also, remaining is recalculated last, so I don't know why remaining is the value being updated in the if block.

        Jean-Daniel Cryans added a comment -

        I think regions.size() should increase in the above loop, so I don't understand the condition for the if above.

        Yeah, that was a last-minute change. I actually tested with "regions.size() > lastNumberOfRegions" and then thought that number was going down; I was confused with regionsToAssign().

        Also, remaining is recalculated last, so I don't know why remaining is the value being updated in the if block.

        Derp sorry it should be the timeout that's incremented.
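
        The v2 patch itself isn't reproduced here, but given the two corrections above (compare with > and bump the timeout rather than remaining), the fragment presumably ends up along these lines:

        // Punt on the timeout as long as we make progress: the count of online
        // regions grows, so compare with >, and bump timeout itself because
        // remaining is recomputed from it on the next line anyway.
        if (regions.size() > lastNumberOfRegions) {
          lastNumberOfRegions = regions.size();
          timeout += waitingTimeForEvents;
        }
        remaining = timeout - (System.currentTimeMillis() - startTime);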

        Jean-Daniel Cryans added a comment -

        Patch fixed.

        Ted Yu added a comment -

        +1 on patch version 2.

        stack added a comment -

        regions.isEmpty() is cheaper than:

        +        if (regions.size() == 0) {

        You get the size of the region list to do logging... a few lines later. Might as well cache it?

        You can address on commit.

        +1 on patch.
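
        For illustration only, the two nits might look like this in the same fragment (regions and LOG are assumed from the surrounding patch context, and outstanding is a made-up local):

        if (regions.isEmpty()) {              // cheaper than regions.size() == 0
          break;                              // nothing left to wait for
        }
        int outstanding = regions.size();     // cache it; reused in the log line below
        LOG.debug("Waiting on " + outstanding + " region(s) to come online");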

        Jean-Daniel Cryans added a comment -

        Patch for trunk, contains fixes for what Stack wrote.

        Jean-Daniel Cryans added a comment -

        Committed to branch and trunk.

        Hudson added a comment -

        Integrated in HBase-TRUNK #2217 (See https://builds.apache.org/job/HBase-TRUNK/2217/)
        HBASE-4395 EnableTableHandler races with itself

        jdcryans :
        Files :

        • /hbase/trunk/CHANGES.txt
        • /hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/handler/EnableTableHandler.java

          People

          • Assignee:
            Jean-Daniel Cryans
          • Reporter:
            Jean-Daniel Cryans
          • Votes:
            0
          • Watchers:
            1
