HBASE-16349: TestClusterId may hang during cluster shutdown

    Details

    • Type: Test
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.4.0, 2.0.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      I was running TestClusterId on branch-1 and observed the test hang during tearDown().

      2016-08-03 11:36:39,600 DEBUG [RS_CLOSE_META-cn012:49371-0] regionserver.HRegion(1415): Closing hbase:meta,,1.1588230740: disabling compactions & flushes
      2016-08-03 11:36:39,600 DEBUG [RS_CLOSE_META-cn012:49371-0] regionserver.HRegion(1442): Updates disabled for region hbase:meta,,1.1588230740
      2016-08-03 11:36:39,600 INFO  [RS_CLOSE_META-cn012:49371-0] regionserver.HRegion(2253): Flushing 1/1 column families, memstore=232 B
      2016-08-03 11:36:39,601 WARN  [RS_OPEN_META-cn012:49371-0.append-pool17-t1] wal.FSHLog$RingBufferEventHandler(1900): Append sequenceId=8, requesting roll of WAL
      java.io.IOException: All datanodes DatanodeInfoWithStorage[127.0.0.1:37765,DS-9870993e-fb98-45fc-b151-708f72aa02d2,DISK] are bad. Aborting...
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1113)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:876)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:402)
      2016-08-03 11:36:39,602 FATAL [RS_CLOSE_META-cn012:49371-0] regionserver.HRegionServer(2085): ABORTING region server cn012.l42scl.hortonworks.com,49371,1470249187586: Unrecoverable exception while closing region hbase:meta,,1.1588230740, still finishing close
      org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1902)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1754)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1676)
        at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
      Caused by: java.io.IOException: All datanodes DatanodeInfoWithStorage[127.0.0.1:37765,DS-9870993e-fb98-45fc-b151-708f72aa02d2,DISK] are bad. Aborting...
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1113)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:876)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:402)
      2016-08-03 11:36:39,603 FATAL [RS_CLOSE_META-cn012:49371-0] regionserver.HRegionServer(2093): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
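
      As a test-side guard (an illustrative sketch, not the attached patch), a JUnit 4 Timeout rule bounds each test method together with its @Before/@After statements, so a shutdownMiniCluster() that never returns fails the test instead of wedging the whole run. The class and test names below are hypothetical:

      import org.apache.hadoop.hbase.HBaseTestingUtility;
      import org.junit.After;
      import org.junit.Rule;
      import org.junit.Test;
      import org.junit.rules.Timeout;

      public class TestClusterIdTimeoutSketch {
        private final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

        // A method rule wraps @Before/@After as well as the test body, so a
        // hung cluster shutdown surfaces as a test timeout failure.
        @Rule
        public final Timeout globalTimeout = Timeout.seconds(180);

        @Test
        public void testClusterIdSketch() throws Exception {
          TEST_UTIL.startMiniCluster(1);
          // ... exercise the cluster id as the real test does ...
        }

        @After
        public void tearDown() throws Exception {
          TEST_UTIL.shutdownMiniCluster();  // the hang observed above happens here
        }
      }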
      

      The abort left hbase:meta in the online-region set ("still finishing close" above), so the region server thread never exited and rst.join() hung:

      "RS:0;cn012:49371" #648 prio=5 os_prio=0 tid=0x00007fdab24b5000 nid=0x621a waiting on condition [0x00007fd785fe0000]
         java.lang.Thread.State: TIMED_WAITING (sleeping)
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.sleep(HRegionServer.java:1326)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.waitOnAllRegionsToClose(HRegionServer.java:1312)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1082)
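
      The sleep above never ends because nothing removes hbase:meta from the online-region set once the close handler has aborted. Below is a minimal, self-contained sketch of that wait pattern and of one possible guard, checking an abort flag inside the loop; the class and field names are illustrative, and this is not necessarily what the attached patch does:

      import java.util.concurrent.ConcurrentHashMap;
      import java.util.concurrent.TimeUnit;
      import java.util.concurrent.atomic.AtomicBoolean;

      public class WaitOnCloseSketch {
        // Stand-ins for the region server's online-region map and abort flag.
        private final ConcurrentHashMap<String, Object> onlineRegions = new ConcurrentHashMap<>();
        private final AtomicBoolean abortRequested = new AtomicBoolean(false);

        void waitOnAllRegionsToClose() throws InterruptedException {
          while (!onlineRegions.isEmpty()) {
            // Without this check the loop spins forever once the close
            // handler has aborted and will never remove the region.
            if (abortRequested.get()) {
              return;
            }
            TimeUnit.MILLISECONDS.sleep(200);
          }
        }

        public static void main(String[] args) throws Exception {
          WaitOnCloseSketch rs = new WaitOnCloseSketch();
          rs.onlineRegions.put("hbase:meta,,1.1588230740", new Object());
          // Simulate the DamagedWALException abort: the region is never removed.
          rs.abortRequested.set(true);
          rs.waitOnAllRegionsToClose();  // returns instead of hanging
          System.out.println("shutdown proceeds");
        }
      }

      With such a guard the region server thread falls out of the wait loop once it has already decided to abort, so rst.join() in the mini cluster shutdown can complete.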
      

        Attachments

        1. 16349.v1.txt (0.6 kB, Ted Yu)
        2. 16349.branch-1.v1.txt (0.6 kB, Ted Yu)


            People

            • Assignee: Ted Yu
            • Reporter: Ted Yu
            • Votes: 0
            • Watchers: 4
