HBase / HBASE-9292

Syncer fails but we won't go down
Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Cannot Reproduce
    • Affects Version/s: 0.95.2
    • Fix Version/s: 0.99.0
    • Component/s: wal
    • Labels: None
    • Environment: hadoop-2.1.0-beta and tip of 0.95 branch

    Description

      Running some simple loading tests on hadoop-2.1.0-beta, I ran into the following.

      2013-08-20 16:51:56,310 DEBUG [regionserver60020.logRoller] regionserver.LogRoller: HLog roll requested
      2013-08-20 16:51:56,314 DEBUG [regionserver60020.logRoller] wal.FSHLog: cleanupCurrentWriter  waiting for transactions to get synced  total 655761 synced till here 655750
      2013-08-20 16:51:56,360 INFO  [regionserver60020.logRoller] wal.FSHLog: Rolled WAL /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042714402 with entries=985, filesize=122.5 M; new WAL /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042716311
      2013-08-20 16:51:56,378 WARN  [Thread-4788] hdfs.DFSClient: DataStreamer Exception
      org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042716311 could only be replicated to 0 nodes instead of minReplication (=1).  There are 5 datanode(s) running and no node(s) are excluded in this operation.
              at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
              at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2458)
              at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:525)
              at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
              at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
              at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
              at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
              at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
              at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
              at java.security.AccessController.doPrivileged(Native Method)
              at javax.security.auth.Subject.doAs(Subject.java:396)
              at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
              at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
      
              at org.apache.hadoop.ipc.Client.call(Client.java:1347)
              at org.apache.hadoop.ipc.Client.call(Client.java:1300)
              at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
              at $Proxy13.addBlock(Unknown Source)
              at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
              at java.lang.reflect.Method.invoke(Method.java:597)
              at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:188)
              at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
              at $Proxy13.addBlock(Unknown Source)
              at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
              at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
              at java.lang.reflect.Method.invoke(Method.java:597)
              at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
              at $Proxy14.addBlock(Unknown Source)
              at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1220)
              at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1073)
      ...
      

      Thereafter the server is up but useless and can't go down because it just keeps doing this:

      2013-08-20 16:51:56,380 FATAL [RpcServer.handler=3,port=60020] wal.FSHLog: Could not sync. Requesting roll of hlog
      org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042716311 could only be replicated to 0 nodes instead of minReplication (=1).  There are 5 datanode(s) running and no node(s) are excluded in this operation.
              at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
              at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2458)
              at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:525)
              at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
              at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
              at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
              at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
              at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
              at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
              at java.security.AccessController.doPrivileged(Native Method)
              at javax.security.auth.Subject.doAs(Subject.java:396)
              at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
              at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
      
              at org.apache.hadoop.ipc.Client.call(Client.java:1347)
              at org.apache.hadoop.ipc.Client.call(Client.java:1300)
              at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
              at $Proxy13.addBlock(Unknown Source)
              at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
              at java.lang.reflect.Method.invoke(Method.java:597)
              at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:188)
              at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
              at $Proxy13.addBlock(Unknown Source)
              at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
              at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
              at java.lang.reflect.Method.invoke(Method.java:597)
              at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
              at $Proxy14.addBlock(Unknown Source)
              at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1220)
              at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1073)
              at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:509)
      ...
      

      It goes on like this forever.

      Here is a bit more of the log:

      2013-08-21 04:30:07,932 ERROR [regionserver60020.logSyncer] wal.FSHLog: Error while syncing, requesting close of hlog
      org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042716482 could only be replicated to 0 nodes instead of minReplication (=1).  There are 5 datanode(s) running and no node(s) are excluded in this operation.
              at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
              at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2458)
              at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:525)
              at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
              at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
              at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
              at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
              at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
              at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
              at java.security.AccessController.doPrivileged(Native Method)
              at javax.security.auth.Subject.doAs(Subject.java:396)
              at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
              at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
      
              at org.apache.hadoop.ipc.Client.call(Client.java:1347)
              at org.apache.hadoop.ipc.Client.call(Client.java:1300)
              at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
              at $Proxy13.addBlock(Unknown Source)
              at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
              at java.lang.reflect.Method.invoke(Method.java:597)
              at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:188)
              at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
              at $Proxy13.addBlock(Unknown Source)
              at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
              at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
              at java.lang.reflect.Method.invoke(Method.java:597)
              at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
              at $Proxy14.addBlock(Unknown Source)
              at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1220)
              at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1073)
              at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:509)
                                                                                                                                                                                                2993503,2-9   Bot
              at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
              at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
              at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
              at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
              at java.security.AccessController.doPrivileged(Native Method)
              at javax.security.auth.Subject.doAs(Subject.java:396)
              at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
              at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
      
              at org.apache.hadoop.ipc.Client.call(Client.java:1347)
              at org.apache.hadoop.ipc.Client.call(Client.java:1300)
              at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
              at $Proxy13.addBlock(Unknown Source)
              at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
              at java.lang.reflect.Method.invoke(Method.java:597)
              at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:188)
              at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
              at $Proxy13.addBlock(Unknown Source)
              at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
              at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
              at java.lang.reflect.Method.invoke(Method.java:597)
              at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
              at $Proxy14.addBlock(Unknown Source)
              at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1220)
              at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1073)
              at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:509)
      2013-08-21 04:30:07,932 FATAL [regionserver60020.logSyncer] wal.FSHLog: Could not sync. Requesting roll of hlog
      org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042716482 could only be replicated to 0 nodes instead of minReplication (=1).  There are 5 datanode(s) running and no node(s) are excluded in this operation.
              at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
              at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2458)
              at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:525)
      

      We broke something in here (hadoop-2.1.0-beta going bad all of a sudden is interesting too).
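
      For illustration only, here is a minimal Java sketch of the failure mode described above. This is not the real FSHLog code; the Wal interface, class, and method names are hypothetical. The point is the shape of the loop: if every sync attempt throws, a handler that only logs FATAL and requests another log roll never escalates to aborting the region server, so the process stays up but can make no progress.

      import java.io.IOException;
      import java.util.concurrent.atomic.AtomicBoolean;

      /**
       * Hypothetical sketch of the failure mode, not the real FSHLog implementation:
       * when every sync attempt throws, the loop only logs FATAL and requests a roll,
       * and nothing ever asks the region server to go down.
       */
      public class SyncLoopSketch {

        /** Stand-in for the WAL; an assumed interface, not the real HBase one. */
        interface Wal {
          void sync() throws IOException;
          void requestLogRoll();
        }

        private final Wal wal;
        private final AtomicBoolean running = new AtomicBoolean(true);

        SyncLoopSketch(Wal wal) {
          this.wal = wal;
        }

        void stop() {
          running.set(false);
        }

        void run() throws InterruptedException {
          while (running.get()) {
            try {
              wal.sync();
            } catch (IOException e) {
              // Mirrors the logged behavior: "Could not sync. Requesting roll of hlog",
              // then loop again forever. A bounded-retry variant would abort the server
              // after N consecutive failures instead of spinning.
              System.err.println("FATAL: Could not sync. Requesting roll of hlog: " + e);
              wal.requestLogRoll();
            }
            Thread.sleep(100); // stand-in for the sync interval
          }
        }

        public static void main(String[] args) throws Exception {
          // A WAL whose sync always fails, like the DFS client above that cannot
          // allocate a new block for the rolled WAL file.
          Wal brokenWal = new Wal() {
            public void sync() throws IOException {
              throw new IOException("could only be replicated to 0 nodes");
            }
            public void requestLogRoll() { /* the roll never helps either */ }
          };
          SyncLoopSketch syncer = new SyncLoopSketch(brokenWal);
          Thread t = new Thread(() -> {
            try {
              syncer.run();
            } catch (InterruptedException ie) {
              Thread.currentThread().interrupt();
            }
          });
          t.start();
          Thread.sleep(500); // watch it spin for a moment, then stop the demo
          syncer.stop();
          t.join();
        }
      }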

          People

            Assignee: Unassigned
            Reporter: Michael Stack (stack)
            Votes: 0
            Watchers: 5
