SPARK-27094: Thread interrupt being swallowed while launching executors in YarnAllocator


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.4.0
    • Fix Version/s: 2.4.1, 3.0.0
    • Component/s: Spark Core, YARN
    • Labels: None

    Description

      When shutting down a SparkContext, the YarnAllocator thread is interrupted. If the interrupt happens just at the wrong time, you'll see something like this:

      19/03/05 07:04:20 WARN ScriptBasedMapping: Exception running blah
      java.io.IOException: java.lang.InterruptedException
      	at org.apache.hadoop.util.Shell.runCommand(Shell.java:578)
      	at org.apache.hadoop.util.Shell.run(Shell.java:478)
      	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:766)
      	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:251)
      	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:188)
      	at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
      	at org.apache.hadoop.yarn.util.RackResolver.coreResolve(RackResolver.java:101)
      	at org.apache.hadoop.yarn.util.RackResolver.resolve(RackResolver.java:81)
      	at org.apache.spark.deploy.yarn.SparkRackResolver.resolve(SparkRackResolver.scala:37)
      	at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$handleAllocatedContainers$2.apply(YarnAllocator.scala:431)
      	at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$handleAllocatedContainers$2.apply(YarnAllocator.scala:430)
      	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
      	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
      	at org.apache.spark.deploy.yarn.YarnAllocator.handleAllocatedContainers(YarnAllocator.scala:430)
      	at org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:281)
      	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:556)
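      The WARN line at the top of that trace is ScriptBasedMapping catching the wrapped exception and merely logging it: Hadoop's Shell catches the InterruptedException, rethrows it inside an IOException, and by that point the thread's interrupt status has already been cleared. A minimal, self-contained sketch of that pattern (illustrative stand-in code, not the Hadoop or Spark sources) shows why the caller can no longer observe the interrupt:

      import java.io.IOException

      object InterruptSwallowDemo {
        // Hypothetical stand-in for the RackResolver -> Shell call chain above.
        def resolveRack(): Unit = {
          try {
            Thread.sleep(10000) // stands in for Shell.runCommand waiting on a subprocess
          } catch {
            case e: InterruptedException =>
              // Throwing InterruptedException already cleared the interrupt flag;
              // wrapping it in an IOException hides it from callers entirely.
              throw new IOException(e)
          }
        }

        def main(args: Array[String]): Unit = {
          val t = new Thread(() => {
            try resolveRack()
            catch {
              case _: IOException =>
                // Mirrors what ScriptBasedMapping does: catch and log. The flag is
                // now false, so a loop guarded by isInterrupted keeps running.
                println(s"interrupt flag after catch: ${Thread.currentThread().isInterrupted}")
            }
          })
          t.start()
          Thread.sleep(200) // let the thread block in sleep first
          t.interrupt()
          t.join()
        }
      }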
      

      That means the YARN code being called (RackResolver) is swallowing the interrupt, so the Spark allocator thread never exits. In this particular app, the allocator was in the middle of allocating a very large number of executors, so the application looked hung, and executors kept coming up even though the context was being shut down.
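      One defensive pattern for a polling thread in this situation (a sketch with illustrative names, not necessarily the change that resolved this issue): since the library clears the interrupt flag and logs the wrapped InterruptedException internally, the loop can rely on neither, so the shutdown path should also set an explicit flag that the loop checks on every iteration:

      object AllocatorLoopSketch {
        @volatile private var stopped = false // set by the shutdown path

        // Called from the thread stopping the SparkContext (hypothetical helper).
        def stop(allocatorThread: Thread): Unit = {
          stopped = true
          allocatorThread.interrupt() // still wakes a thread blocked in sleep/wait
        }

        // Allocator-style loop; allocateOnce may swallow interrupts internally.
        def loop(allocateOnce: () => Unit, intervalMs: Long): Unit = {
          while (!stopped && !Thread.currentThread().isInterrupted) {
            allocateOnce()
            try Thread.sleep(intervalMs)
            catch { case _: InterruptedException => stopped = true }
          }
        }
      }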


          People

            Assignee: Marcelo Masiero Vanzin (vanzin)
            Reporter: Marcelo Masiero Vanzin (vanzin)
            Votes: 0
            Watchers: 3
