Spark / SPARK-33031

scheduler with blacklisting doesn't appear to pick up new executor added


Details

    • Type: Bug
    • Status: Open
    • Priority: Critical
    • Resolution: Unresolved
    • Affects Version/s: 3.0.0, 3.1.0
    • Fix Version/s: None
    • Component/s: Scheduler
    • Labels: None

    Description

      I was running a test with blacklisting in standalone mode, and all the executors were initially blacklisted. Then one of the executors died and we were allocated another one. The scheduler did not appear to pick up the new one and try to schedule on it, though.

      You can reproduce this by starting a master and slave on a single node, then launching a shell such that you get multiple executors (in this case I got 3):

      $SPARK_HOME/bin/spark-shell --master spark://yourhost:7077 --executor-cores 4 --conf spark.blacklist.enabled=true
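
      For reference, the same launch can spell out the blacklist thresholds explicitly. The extra properties below are believed to be the Spark 3.0 names and defaults (they were renamed under spark.excludeOnFailure.* in later releases); check the configuration docs for the version under test.

      # Illustrative: same command with the relevant blacklist knobs made explicit.
      $SPARK_HOME/bin/spark-shell \
        --master spark://yourhost:7077 \
        --executor-cores 4 \
        --conf spark.blacklist.enabled=true \
        --conf spark.blacklist.task.maxTaskAttemptsPerExecutor=1 \
        --conf spark.blacklist.stage.maxFailedTasksPerExecutor=2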

      From the shell, run:

      import org.apache.spark.TaskContext

      val rdd = sc.makeRDD(1 to 1000, 5).mapPartitions { it =>
        val context = TaskContext.get()
        // Fail every task on its first two attempts so the executors
        // accumulate failures and get blacklisted.
        if (context.attemptNumber() < 2) {
          throw new Exception("test attempt num")
        }
        it
      }
      rdd.collect()
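
      Why this blacklists everything: every task throws on its first two attempts, so each executor quickly accumulates task failures and gets blacklisted. A newly allocated executor, however, starts with a clean failure record, so the scheduler should be able to place tasks on it — which is the behavior this issue reports as broken. A toy sketch of that bookkeeping (illustrative class name and threshold, not Spark's actual blacklist tracker):

      import scala.collection.mutable

      // Minimal sketch of per-executor blacklist bookkeeping, assuming a
      // failure threshold of maxFailuresPerExecutor (not Spark's real code).
      class ToyBlacklistTracker(maxFailuresPerExecutor: Int) {
        private val failures = mutable.Map.empty[String, Int].withDefaultValue(0)
        private val blacklisted = mutable.Set.empty[String]

        def taskFailed(executorId: String): Unit = {
          failures(executorId) += 1
          if (failures(executorId) >= maxFailuresPerExecutor) blacklisted += executorId
        }

        // A replacement executor starts with a clean record, so it should
        // immediately be schedulable again.
        def executorAdded(executorId: String): Unit = {
          failures -= executorId
          blacklisted -= executorId
        }

        def schedulable(executors: Seq[String]): Seq[String] =
          executors.filterNot(blacklisted)
      }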

       

      Note that I tried both with and without dynamic allocation enabled.

       

      You can see a related screenshot at https://issues.apache.org/jira/browse/SPARK-33029

      Attachments

        Activity

          People

            Assignee: Unassigned
            Reporter: tgraves (Thomas Graves)
            Votes: 0
            Watchers: 4
