Spark / SPARK-21991

[LAUNCHER] LauncherServer acceptConnections thread sometimes dies if machine has very high load



    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.0.2, 2.1.0, 2.1.1, 2.2.0
    • Fix Version/s: 2.0.3, 2.1.3, 2.2.1, 2.3.0
    • Component/s: Spark Submit
    • Labels: None
    • Environment: Single node machine running Ubuntu 16.04.2 LTS (4.4.0-79-generic)
      YARN 2.7.2
      Spark 2.0.2


      The way the LauncherServer acceptConnections thread schedules client timeouts can non-deterministically cause the thread to die with the following exception when the machine is under very high load:

      Exception in thread "LauncherServer-1" java.lang.IllegalStateException: Task already scheduled or cancelled
              at java.util.Timer.sched(Timer.java:401)
              at java.util.Timer.schedule(Timer.java:193)
              at org.apache.spark.launcher.LauncherServer.acceptConnections(LauncherServer.java:249)
              at org.apache.spark.launcher.LauncherServer.access$000(LauncherServer.java:80)
              at org.apache.spark.launcher.LauncherServer$1.run(LauncherServer.java:143)
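      The exception itself comes from java.util.Timer's state machine: a TimerTask that has been cancelled, even one that was never scheduled, can never be scheduled afterwards. A minimal standalone demonstration (the class and method names are illustrative, not Spark code):

```java
import java.util.Timer;
import java.util.TimerTask;

public class TimerCancelDemo {
    /**
     * Cancels a task before it is ever scheduled (as the client thread does
     * when it wins the race) and returns the resulting error message.
     */
    static String cancelThenSchedule() {
        Timer timer = new Timer("demo-timer", true);
        TimerTask timeout = new TimerTask() {
            @Override public void run() { /* would fire the client timeout */ }
        };

        // Client thread wins the race: cancel the not-yet-scheduled task.
        timeout.cancel();

        try {
            // acceptConnections thread runs next: scheduling a cancelled
            // task always throws IllegalStateException.
            timer.schedule(timeout, 1000L);
            return "scheduled";
        } catch (IllegalStateException e) {
            return e.getMessage();
        } finally {
            timer.cancel();
        }
    }

    public static void main(String[] args) {
        System.out.println(cancelThenSchedule());
    }
}
```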

      The issue is related to the ordering of actions that the acceptConnections thread uses to handle a client connection:

      1. create timeout action
      2. create client thread
      3. start client thread
      4. schedule timeout action

      Under normal conditions the timeout action is scheduled before the client thread has a chance to start; however, if the machine is under very high load, the client thread can receive CPU time before the timeout action is scheduled.

      When this happens, the client thread cancels the timeout action (which has not yet been scheduled) and carries on, but as soon as the acceptConnections thread gets the CPU back, it tries to schedule the already-cancelled timeout action, which raises the exception.

      Swapping the order, so that the timeout gets scheduled before the client thread is started, seems to be sufficient to fix this issue.
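      A sketch of the reordered connection handling (AcceptLoopSketch, handleNewConnection, and the handler wiring are hypothetical illustrations of the idea, not the actual Spark patch):

```java
import java.util.Timer;
import java.util.TimerTask;

public class AcceptLoopSketch {
    private final Timer timeoutTimer = new Timer("launcher-timeouts", true);

    /**
     * Illustrative handling of one accepted client connection. Returns the
     * timeout task so callers can cancel it, mimicking the client thread.
     */
    TimerTask handleNewConnection(Runnable clientHandler, long timeoutMs) {
        TimerTask timeout = new TimerTask() {
            @Override public void run() {
                // close the client connection on timeout (elided)
            }
        };
        Thread clientThread = new Thread(clientHandler, "launcher-client");

        // Schedule the timeout BEFORE starting the client thread: a fast
        // client can now only ever cancel a task that is already scheduled,
        // which is legal, instead of cancelling a virgin task that the
        // accept loop will later try to schedule.
        timeoutTimer.schedule(timeout, timeoutMs);
        clientThread.start();
        return timeout;
    }
}
```

      With this ordering, cancelling the timeout from the client thread is always safe because java.util.Timer allows cancelling a scheduled task at any point.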

      As stated above, the issue is non-deterministic: I hit it multiple times on a single-node machine while submitting a high number of short jobs sequentially, but I couldn't easily write a test that reproduces it.




            Assignee: Andrea Zito (nivox)
            Reporter: Andrea Zito (nivox)
            Marcelo Masiero Vanzin
            Votes: 1
            Watchers: 4