Spark / SPARK-32898

totalExecutorRunTimeMs is too big


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.4.7, 3.0.1
    • Fix Version/s: 2.4.8, 3.0.2, 3.1.0
    • Component/s: Spark Core
    • Labels: None

    Description

      This might be caused by incorrectly calculating executorRunTimeMs in Executor.scala:
      the function collectAccumulatorsAndResetStatusOnFailure(taskStartTimeNs) can be called before taskStartTimeNs has been set (i.e. while it is still 0).

      As of now, in the master branch, this is the problematic code:

      https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/executor/Executor.scala#L470

       

      An exception is thrown before this line, yet the catch branch still updates the metric.
      However, the query is reported as SUCCESSful, possibly because this task is speculative; not sure.
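A minimal sketch of the failure mode (hypothetical names, not the actual Executor.scala code): when taskStartTimeNs is still 0, the subtraction yields the raw value of the monotonic clock rather than the task's elapsed time, which is why the reported run time is enormous. A guarded variant skips the update when the start timestamp was never set.

```scala
object RunTimeBugSketch {
  // Mirrors the buggy pattern: if the task fails before taskStartTimeNs is
  // assigned, it is still 0, and the "run time" becomes the full clock value.
  def executorRunTimeMs(taskStartTimeNs: Long, taskFinishNs: Long): Long =
    (taskFinishNs - taskStartTimeNs) / 1000000L

  // Guarded variant: report 0 when the start timestamp was never recorded.
  def safeExecutorRunTimeMs(taskStartTimeNs: Long, taskFinishNs: Long): Long =
    if (taskStartTimeNs == 0L) 0L
    else (taskFinishNs - taskStartTimeNs) / 1000000L
}
```

With taskStartTimeNs = 0 and a finish timestamp of 5 seconds into the clock, the unguarded version reports 5000 ms of "run time" for a task that never started; the guarded version reports 0.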

       

      submissionTime in LiveExecutionData may have a similar problem.

      https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLAppStatusListener.scala#L449
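The same unset-timestamp pattern can be guarded against by modeling the submission time as an Option instead of a sentinel value (a hypothetical sketch, not the actual SQLAppStatusListener code):

```scala
// Hypothetical stand-in for the listener's per-execution state.
case class LiveExecutionSketch(submissionTime: Option[Long]) {
  // Only compute a duration when a submission time was actually recorded;
  // subtracting an unset (0 or -1) sentinel would inflate the duration.
  def durationMs(completionTimeMs: Long): Option[Long] =
    submissionTime.map(start => completionTimeMs - start)
}
```

Callers then decide explicitly what to display when no duration is available, rather than silently rendering a bogus value.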

       

      Attachments

        Activity

          People

            Assignee: wuyi (Ngone51)
            Reporter: Linhong Liu (linhongliu-db)
            Votes: 0
            Watchers: 5

            Dates

              Created:
              Updated:
              Resolved: