SPARK-23053

taskBinary serialization and task partition calculation in DAGScheduler.submitMissingTasks should see the same RDD checkpoint status

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.1.0
    • Fix Version/s: 2.1.3, 2.2.2, 2.3.0
    • Component/s: Scheduler, Spark Core
    • Labels: None

      Description

      This can happen when we run concurrent jobs that use the same RDD marked for checkpointing. If one job has finished and starts RDD.doCheckpoint while another job is being submitted, submitStage and submitMissingTasks are called for the second job. submitMissingTasks serializes taskBinaryBytes and computes the task partitions, and both of these depend on the RDD's checkpoint status. If taskBinaryBytes is serialized before doCheckpoint finishes, while the partitions are read after doCheckpoint finishes, then when the task runs and rdd.compute is called, RDDs that cast their partition to a specific type, such as MapWithStateRDD, throw a ClassCastException because the partition passed in is actually a CheckpointRDDPartition.
      This mismatch is possible because rdd.doCheckpoint runs in the thread that called sc.runJob, while the task serialization happens in the DAGScheduler's event loop.
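      For context, here is a minimal sketch of the kind of workload that can hit this race, assuming a local SparkContext and one shared RDD marked for checkpointing. The object name and checkpoint directory are illustrative, and the MapWithStateRDD case from the description needs a streaming job, so this only reproduces the general pattern of concurrent jobs over a single checkpointed RDD, not the exact failure:

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, Future}

import org.apache.spark.{SparkConf, SparkContext}

object CheckpointRaceSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("checkpoint-race").setMaster("local[4]")
    val sc = new SparkContext(conf)
    // Hypothetical checkpoint directory; any writable path works.
    sc.setCheckpointDir("/tmp/checkpoint-race")

    // A single RDD marked for checkpointing, shared by several concurrent jobs.
    val shared = sc.parallelize(1 to 1000, numSlices = 8).map(i => (i % 10, i))
    shared.checkpoint()

    // Each action below is a separate job. After the first job finishes,
    // doCheckpoint runs in that job's calling thread and replaces the RDD's
    // partitions with CheckpointRDDPartitions. A job submitted around the
    // same time has its taskBinary serialized and its partitions read inside
    // the DAGScheduler event loop, so the two can observe different
    // checkpoint states of the same RDD.
    val jobs = (1 to 10).map { _ =>
      Future(shared.reduceByKey(_ + _).count())
    }
    jobs.foreach(job => Await.result(job, Duration.Inf))

    sc.stop()
  }
}
```

      Whether the ClassCastException actually surfaces depends on timing: the failing interleaving is the one where taskBinaryBytes is serialized before doCheckpoint completes but the task partitions are read after it completes.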


            People

            • Assignee: ivoson huangtengfei
            • Reporter: ivoson huangtengfei
            • Votes: 0
            • Watchers: 3
