HBase / HBASE-21259

[amv2] Revived deadservers; recreated serverstatenode



    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version: 2.1.0
    • Fix Versions: 2.1.1, 2.0.3
    • Component: amv2
    • Labels: None
    • Hadoop Flags: Reviewed


      On startup, I see servers being revived; i.e. their ServerStateNode is getting marked online even though it has just been processed by ServerCrashProcedure. It looks like this (in a patched server that reports whenever a ServerStateNode is created):

      2018-09-29 03:45:40,963 INFO org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=3982597, state=SUCCESS; ServerCrashProcedure server=vb1442.halxg.cloudera.com,22101,1536675314426, splitWal=true, meta=false in 1.0130sec
      2018-09-29 03:45:43,733 INFO org.apache.hadoop.hbase.master.assignment.RegionStates: CREATING! vb1442.halxg.cloudera.com,22101,1536675314426
      java.lang.RuntimeException: WHERE AM I?
              at org.apache.hadoop.hbase.master.assignment.RegionStates.getOrCreateServer(RegionStates.java:1116)
              at org.apache.hadoop.hbase.master.assignment.RegionStates.addRegionToServer(RegionStates.java:1143)
              at org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1464)
              at org.apache.hadoop.hbase.master.assignment.UnassignProcedure.updateTransition(UnassignProcedure.java:200)
              at org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:369)
              at org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:97)
              at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:953)
              at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1716)
              at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1494)
              at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:75)
              at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2022)

      See how we've just finished an SCP, which will have removed the ServerStateNode... but then we come across an unassign that references the server that was just processed. The unassign will attempt to update the ServerStateNode, and therein we create one if one is not present. We shouldn't be creating one.

      I think I see this a lot because I am scheduling unassigns with hbck2. The servers crash and come back up while SCPs are cleaning up after the old server instances and unassign procedures still sit in the procedure executor queue waiting to be processed... but this could happen at any time on a cluster, should an unassign happen to get scheduled near an SCP.
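
      The fix shape described above can be sketched as a guard in the get-or-create path: once an SCP has processed a server, a lookup should refuse to recreate its state node. This is a minimal, self-contained sketch of that pattern, not the actual HBase code; the class and method names (ServerRegistry, serverCrashed, hasServer) are illustrative stand-ins for RegionStates/getOrCreateServer.

      ```java
      import java.util.HashMap;
      import java.util.HashSet;
      import java.util.Map;
      import java.util.Set;

      // Hypothetical sketch: a registry whose getOrCreateServer() will not
      // resurrect state for a server already processed by a crash procedure.
      public class ServerRegistry {
        private final Map<String, Object> serverStateNodes = new HashMap<>();
        private final Set<String> deadServers = new HashSet<>();

        /** Called when an SCP finishes: drop the node and remember the death. */
        public void serverCrashed(String serverName) {
          serverStateNodes.remove(serverName);
          deadServers.add(serverName);
        }

        /**
         * Return the state node for serverName, creating one only if the
         * server is not known dead; return null instead of reviving it.
         */
        public Object getOrCreateServer(String serverName) {
          if (deadServers.contains(serverName)) {
            return null; // do not recreate a node for a just-crashed server
          }
          return serverStateNodes.computeIfAbsent(serverName, k -> new Object());
        }

        public boolean hasServer(String serverName) {
          return serverStateNodes.containsKey(serverName);
        }
      }
      ```

      With such a guard, an unassign racing an SCP would see null (and could fail or retry) rather than silently reviving the dead server's node.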


        Attachments

        1. HBASE-21259.branch-2.1.001.patch
          20 kB
          Michael Stack
        2. HBASE-21259.branch-2.1.002.patch
          26 kB
          Michael Stack
        3. HBASE-21259.branch-2.1.003.patch
          34 kB
          Michael Stack
        4. HBASE-21259.branch-2.1.004.patch
          37 kB
          Michael Stack
        5. HBASE-21259.branch-2.1.005.patch
          36 kB
          Michael Stack
        6. HBASE-21259.branch-2.1.006.patch
          34 kB
          Michael Stack

              Assignee: Michael Stack
              Reporter: Michael Stack