YARN-7110: NodeManager always crashes due to Spark shuffle service out of memory


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Duplicate
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: nodemanager
    • Labels: None

    Description

      The NM often crashes due to the Spark shuffle service. I can see many error log messages like the following before the NM crashes:

      2017-08-28 16:14:20,521 ERROR org.apache.spark.network.server.TransportRequestHandler: Error sending result ChunkFetchSuccess{streamChunkId=StreamChunkId{streamId=791888824460, chunkIndex=0}, buffer=FileSegmentManagedBuffer{file=/data11/hadoopdata/nodemanager/local/usercache/map_loc/appcache/application_1502793246072_2171283/blockmgr-11e2d625-8db1-477c-9365-4f6d0a7d1c48/10/shuffle_0_6_0.data, offset=27063401500, length=64785602}} to /10.93.91.17:18958; closing connection
      java.io.IOException: Broken pipe
              at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
              at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
              at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493)
              at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608)
              at org.apache.spark.network.buffer.LazyFileRegion.transferTo(LazyFileRegion.java:96)
              at org.apache.spark.network.protocol.MessageWithHeader.transferTo(MessageWithHeader.java:92)
              at io.netty.channel.socket.nio.NioSocketChannel.doWriteFileRegion(NioSocketChannel.java:254)
              at io.netty.channel.nio.AbstractNioByteChannel.doWrite(AbstractNioByteChannel.java:237)
              at io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:281)
              at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:761)
              at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.forceFlush(AbstractNioChannel.java:317)
              at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:519)
              at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
              at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
              at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
              at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
              at java.lang.Thread.run(Thread.java:745)
      2017-08-28 16:14:20,523 ERROR org.apache.spark.network.server.TransportRequestHandler: Error sending result RpcResponse{requestId=7652091066050104512, body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=13 cap=13]}} to /10.93.91.17:18958; closing connection
      

      Eventually, too many Finalizer objects accumulate in the NM process and cause an OOM.
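
      A minimal, illustrative way to observe this kind of finalizer backlog from inside the JVM is to poll the standard MemoryMXBean pending-finalization count. This sketch is not part of the NM or the shuffle service, and the threshold and poll interval are arbitrary assumptions:

      import java.lang.management.ManagementFactory;
      import java.lang.management.MemoryMXBean;

      // Sketch only: logs a warning when the number of objects waiting for
      // finalization grows large, which is the symptom described above.
      public class FinalizerBacklogMonitor {
          private static final int WARN_THRESHOLD = 10_000;    // assumed threshold
          private static final long POLL_INTERVAL_MS = 10_000; // assumed interval

          public static void main(String[] args) throws InterruptedException {
              MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
              while (true) {
                  // Objects whose finalize() has not yet been run by the finalizer thread.
                  int pending = memory.getObjectPendingFinalizationCount();
                  if (pending > WARN_THRESHOLD) {
                      System.err.println("Finalizer backlog: " + pending
                              + " objects pending finalization");
                  }
                  Thread.sleep(POLL_INTERVAL_MS);
              }
          }
      }

      A heap histogram (for example, jmap -histo on the NM process) showing a large java.lang.ref.Finalizer count points to the same problem.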

      Attachments

        1. screenshot-1.png (335 kB, attached by YunFan Zhou)


            People

              Assignee: Unassigned
              Reporter: YunFan Zhou (daemon)
              Votes: 0
              Watchers: 4
