  Spark / SPARK-11334

numRunningTasks can't be less than 0, or it will affect executor allocation


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.4.0
    • Fix Version/s: 2.3.0
    • Component/s: Spark Core
    • Labels: None
    • Target Version/s:

      Description

      With dynamic allocation enabled, when a task fails more than maxFailure times, all of its dependent jobs, stages, and tasks are killed or aborted. In this process, the SparkListenerTaskEnd event arrives after the SparkListenerStageCompleted and SparkListenerJobEnd events, as in the event log below:

      {"Event":"SparkListenerStageCompleted","Stage Info":{"Stage ID":20,"Stage Attempt ID":0,"Stage Name":"run at AccessController.java:-2","Number of Tasks":200}
      {"Event":"SparkListenerJobEnd","Job ID":9,"Completion Time":1444914699829}
      {"Event":"SparkListenerTaskEnd","Stage ID":20,"Stage Attempt ID":0,"Task Type":"ResultTask","Task End Reason":{"Reason":"TaskKilled"},"Task Info":{"Task ID":1955,"Index":88,"Attempt":2,"Launch Time":1444914699763,"Executor ID":"5","Host":"linux-223","Locality":"PROCESS_LOCAL","Speculative":false,"Getting Result Time":0,"Finish Time":1444914699864,"Failed":true,"Accumulables":[]}}
      

      Because of this, numRunningTasks in the ExecutorAllocationManager class can drop below 0, which in turn throws off executor allocation.
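
      The accounting pattern behind this can be sketched as follows; the class and method names are hypothetical and only illustrate the idea, not Spark's actual ExecutorAllocationManager internals. The listener drops a stage's bookkeeping on SparkListenerStageCompleted, so a SparkListenerTaskEnd that arrives afterwards decrements a counter that no longer has a matching increment:

      import scala.collection.mutable

      // Hypothetical sketch of the accounting problem, not Spark's actual code.
      class RunningTaskTracker {
        private val stageIdToNumRunning = mutable.Map[Int, Int]()
        private var numRunningTasks = 0

        def onTaskStart(stageId: Int): Unit = synchronized {
          stageIdToNumRunning(stageId) = stageIdToNumRunning.getOrElse(stageId, 0) + 1
          numRunningTasks += 1
        }

        def onStageCompleted(stageId: Int): Unit = synchronized {
          // Per-stage bookkeeping is dropped as soon as the stage completes.
          stageIdToNumRunning.remove(stageId)
        }

        def onTaskEnd(stageId: Int): Unit = synchronized {
          if (stageIdToNumRunning.contains(stageId)) {
            stageIdToNumRunning(stageId) -= 1
            numRunningTasks -= 1
          } else {
            // TaskEnd arrived after StageCompleted, as in the event log above.
            // An unconditional decrement here is what drives numRunningTasks
            // below 0; clamping at 0 keeps the value usable for allocation.
            numRunningTasks = math.max(0, numRunningTasks - 1)
          }
        }

        def running: Int = synchronized { numRunningTasks }
      }

      This is only meant to make the symptom concrete; the actual resolution landed in Spark 2.3.0 (see Fix Version above).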




              People

              • Assignee:
                Sital Kedia
              • Reporter:
                meiyoula
              • Votes:
                0
              • Watchers:
                6

