Spark / SPARK-32898

totalExecutorRunTimeMs is too big


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.4.7, 3.0.1
    • Fix Version/s: 2.4.8, 3.0.2, 3.1.0
    • Component/s: Spark Core
    • Labels: None

      Description

      This might be caused by an incorrect calculation of executorRunTimeMs in Executor.scala.
      The function collectAccumulatorsAndResetStatusOnFailure(taskStartTimeNs) can be called before taskStartTimeNs has been set, i.e. while it is still 0.

      As of now, in the master branch, the problematic code is here:

      https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/executor/Executor.scala#L470

       

      An exception is thrown before this line, yet the catch branch still updates the metric.
      However, the query shows as SUCCESSful. Perhaps the task is speculative; I am not sure.
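
      For illustration, here is a minimal, runnable sketch of the failure mode. The names taskStartTimeNs and collectAccumulatorsAndResetStatusOnFailure follow the linked Executor.scala, but the method body below is a simplified stand-in, not Spark's actual code:

      import java.util.concurrent.TimeUnit

      object RunTimeMetricSketch {
        // Mirrors TaskRunner's field: 0 means the task body has not started yet.
        var taskStartTimeNs: Long = 0L

        // Simplified stand-in for collectAccumulatorsAndResetStatusOnFailure:
        // it derives the run-time metric from whatever start time it is handed.
        def executorRunTimeMsOnFailure(startTimeNs: Long): Long =
          TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startTimeNs)

        def main(args: Array[String]): Unit = {
          try {
            // The task fails before taskStartTimeNs is ever assigned...
            throw new RuntimeException("failure before the task body runs")
          } catch {
            case _: Throwable =>
              // ...so the metric becomes nanoTime() - 0: the raw monotonic
              // clock reading in millis, i.e. days of bogus "run time".
              println(s"executorRunTimeMs = ${executorRunTimeMsOnFailure(taskStartTimeNs)}")
          }
        }
      }

      One plausible guard would be to update the metric only when taskStartTimeNs > 0, so a task that fails before starting skips the update; whether that matches the committed fix is not confirmed here.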

       

      submissionTime in LiveExecutionData may also have a similar problem.

      https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLAppStatusListener.scala#L449
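
      A hypothetical sketch of the analogous pattern follows. LiveExecutionData and submissionTime are names from SQLAppStatusListener.scala, but the sentinel value and the duration arithmetic below are assumptions made for illustration, not the listener's actual code:

      object SubmissionTimeSketch {
        // Hypothetical stand-in: -1 as the assumed "not submitted yet" sentinel.
        class LiveExecutionDataSketch { var submissionTime: Long = -1L }

        def main(args: Array[String]): Unit = {
          val exec = new LiveExecutionDataSketch // submissionTime never assigned
          val completionTime = System.currentTimeMillis()
          // Any duration derived from the sentinel is inflated by the entire
          // epoch-millis clock value, the same shape as the run-time bug above.
          val bogusDurationMs = completionTime - exec.submissionTime
          println(s"durationMs = $bogusDurationMs")
        }
      }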

       
