Hadoop HDFS / HDFS-10322

DomainSocket error leads to more and more DataNode threads waiting


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 2.5.0
    • Target Version/s: 2.6.4
    • Component/s: datanode
    • Labels: None

    Description

      When short-circuit read is enabled and a DomainSocket "broken pipe" error occurs, the DataNode accumulates more and more waiting threads.
      It is similar to HADOOP-11802, but I do not think they are the same problem, because here the DomainSocketWatcher thread is in the RUNNABLE state.

      ============ stack log:

      "DataXceiver for client unix:/var/run/hadoop-hdfs/dn.50010 Waiting for operation #1" daemon prio=10 tid=0x000000000278e000 nid=0x2bc6 waiting on condition [0x00007f2d6e4a5000]
      java.lang.Thread.State: WAITING (parking)
      at sun.misc.Unsafe.park(Native Method)

      - parking to wait for <0x000000061c493500> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.add(DomainSocketWatcher.java:316)
        at org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(ShortCircuitRegistry.java:322)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(DataXceiver.java:394)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(Receiver.java:178)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:93)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226)
        at java.lang.Thread.run(Thread.java:745)

      ============= DomainSocketWatcher

      "Thread-759187" daemon prio=10 tid=0x000000000219c800 nid=0x8c56 runnable [0x00007f2dbe4cb000]
      java.lang.Thread.State: RUNNABLE
      at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
      at org.apache.hadoop.net.unix.DomainSocketWatcher.access$900(DomainSocketWatcher.java:52)
      at org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:474)
      at java.lang.Thread.run(Thread.java:745)

      =============== datanode error log

      ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: datanode-xxxx:50010:DataXceiver error processing REQUEST_SHORT_CIRCUIT_SHM operation src: unix:/var/run/hadoop-hdfs/dn.50010 dst: <local>
      java.net.SocketException: write(2) error: Broken pipe
      at org.apache.hadoop.net.unix.DomainSocket.writeArray0(Native Method)
      at org.apache.hadoop.net.unix.DomainSocket.access$300(DomainSocket.java:45)
      at org.apache.hadoop.net.unix.DomainSocket$DomainOutputStream.write(DomainSocket.java:601)
      at com.google.protobuf.CodedOutputStream.refreshBuffer(CodedOutputStream.java:833)
      at com.google.protobuf.CodedOutputStream.flush(CodedOutputStream.java:843)
      at com.google.protobuf.AbstractMessageLite.writeDelimitedTo(AbstractMessageLite.java:91)
      at org.apache.hadoop.hdfs.server.datanode.DataXceiver.sendShmSuccessResponse(DataXceiver.java:371)
      at org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(DataXceiver.java:409)
      at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(Receiver.java:178)
      at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:93)
      at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226)
      at java.lang.Thread.run(Thread.java:745)
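
      The blocked handoff visible in the stack traces can be sketched as follows. This is a minimal illustration, not the actual Hadoop code: `WatcherSketch`, `add`, and `markProcessed` are hypothetical names standing in for the condition-wait handoff inside `DomainSocketWatcher.add`. The point is that each caller parks on a condition until the watcher thread drains its entry, so if the watcher loop stops servicing entries (for example after the broken-pipe error above), every new DataXceiver thread parks indefinitely.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Simplified sketch of the add()/watcher handoff pattern (illustrative,
// not the real DomainSocketWatcher implementation).
public class WatcherSketch {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition processed = lock.newCondition();
    private final Set<Integer> pending = new HashSet<>();

    // Called by DataXceiver-like threads: enqueue an fd, then block
    // until the watcher thread has picked it up.
    public void add(int fd) throws InterruptedException {
        lock.lock();
        try {
            pending.add(fd);
            while (pending.contains(fd)) {
                // Threads pile up parked here if the watcher never
                // removes the entry and signals.
                processed.await();
            }
        } finally {
            lock.unlock();
        }
    }

    // Called by the watcher thread after its poll loop handles the fd.
    public void markProcessed(int fd) {
        lock.lock();
        try {
            pending.remove(fd);
            processed.signalAll();
        } finally {
            lock.unlock();
        }
    }
}
```

      Under this handoff, a healthy watcher signals each waiter promptly; a watcher stuck in its native poll loop leaves every `add()` caller parked on the condition, which matches the growing number of WAITING DataXceiver threads alongside a single RUNNABLE watcher thread.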


            People

              Assignee: Unassigned
              Reporter: ChenFolin
              Votes: 0
              Watchers: 4
