  1. Spark
  2. SPARK-19649

Spark YARN client throws exception if job succeeds and max-completed-applications=0


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Won't Fix
    • Affects Version/s: 1.6.3
    • Fix Version/s: None
    • Component/s: Spark Core, YARN
    • Labels: None
    • Environment: EMR release label 4.8.x

    Description

      I believe the patch in SPARK-3877 created a new race condition between YARN and the Spark client.

      I typically configure YARN not to retain any completed applications in memory, as some of my jobs get pretty large.

      yarn-site: yarn.resourcemanager.max-completed-applications = 0
      
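For reference, the same setting in its usual yarn-site.xml form (this is the standard YARN property; the surrounding XML is the customary property syntax, not copied from this report):

```xml
<!-- yarn-site.xml: keep zero completed applications in RM memory -->
<property>
  <name>yarn.resourcemanager.max-completed-applications</name>
  <value>0</value>
</property>
```

With the value at 0, the ResourceManager evicts an application's report the moment it completes, which is what makes the race below possible.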

      The client's once-per-second call to getApplicationReport may thus see a RUNNING application on one poll and an "application not found" error on the next, and report a false negative for a job that actually succeeded.
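The timing can be sketched with a minimal simulation (hypothetical names; this models the interleaving described above, it is not Spark or YARN code):

```python
# Simulation of the race: with max-completed-applications=0 the RM forgets a
# finished app immediately, so the client's polling loop can observe RUNNING
# and then "not found", and wrongly conclude the application was killed.

class ApplicationNotFoundError(Exception):
    pass

class FakeResourceManager:
    """RM stand-in that evicts an application's report the instant it completes."""
    def __init__(self):
        self.state = "RUNNING"

    def finish(self):
        # App succeeds; with max-completed-applications=0 its report is
        # dropped immediately rather than transitioning to FINISHED.
        self.state = None

    def get_application_report(self):
        if self.state is None:
            raise ApplicationNotFoundError("application not found")
        return self.state

def monitor(rm, polls):
    """Client-side polling loop, loosely modeled on the 1.6.x monitor behavior."""
    for _ in range(polls):
        try:
            state = rm.get_application_report()
        except ApplicationNotFoundError:
            # The client treats "not found" as a killed application, even
            # though the AM already unregistered with SUCCEEDED.
            return "KILLED"
        if state in ("FINISHED", "FAILED", "KILLED"):
            return state
    return "RUNNING"

rm = FakeResourceManager()
assert monitor(rm, polls=1) == "RUNNING"   # poll N: still RUNNING
rm.finish()                                # app succeeds between polls
assert monitor(rm, polls=1) == "KILLED"    # poll N+1: false negative
```

The window is only as wide as the polling interval, but with a report-retention count of zero there is no state the client could poll to learn the true outcome.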

      Executor log (typical):

      17/01/09 19:31:23 INFO ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
      17/01/09 19:31:23 INFO SparkContext: Invoking stop() from shutdown hook
      17/01/09 19:31:24 INFO SparkUI: Stopped Spark web UI at http://10.0.0.168:37046
      17/01/09 19:31:24 INFO YarnClusterSchedulerBackend: Shutting down all executors
      17/01/09 19:31:24 INFO YarnClusterSchedulerBackend: Asking each executor to shut down
      17/01/09 19:31:24 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
      17/01/09 19:31:24 INFO MemoryStore: MemoryStore cleared
      17/01/09 19:31:24 INFO BlockManager: BlockManager stopped
      17/01/09 19:31:24 INFO BlockManagerMaster: BlockManagerMaster stopped
      17/01/09 19:31:24 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
      17/01/09 19:31:24 INFO SparkContext: Successfully stopped SparkContext
      17/01/09 19:31:24 INFO ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
      17/01/09 19:31:24 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
      17/01/09 19:31:24 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
      17/01/09 19:31:24 INFO AMRMClientImpl: Waiting for application to be successfully unregistered.
      17/01/09 19:31:24 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
      

      Client log:

      17/01/09 19:31:23 INFO Client: Application report for application_1483983939941_0056 (state: RUNNING)
      17/01/09 19:31:24 ERROR Client: Application application_1483983939941_0056 not found.
      Exception in thread "main" org.apache.spark.SparkException: Application application_1483983939941_0056 is killed
      	at org.apache.spark.deploy.yarn.Client.run(Client.scala:1038)
      	at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081)
      	at org.apache.spark.deploy.yarn.Client.main(Client.scala)
      	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:606)
      	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
      	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
      	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
      	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
      	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
      


            People

              Assignee: Unassigned
              Reporter: Joshua Caplan
              Votes: 0
              Watchers: 5
