Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Version: 3.0.0
Description
I am running a spark shell on a one-node standalone cluster. I noticed that the Executors page in the UI was marking the driver as blacklisted for the stage that is running. Attached a screenshot.
Also, in my case one of the executors died and it doesn't seem like the scheduler picked up the new one. It doesn't show up on the stages page and is just shown as active, but none of the tasks ran there.
You can reproduce this by starting a master and a worker on a single node, then launching a shell configured so that you get multiple executors (in this case I got 3):
$SPARK_HOME/bin/spark-shell --master spark://yourhost:7077 --executor-cores 4 --conf spark.blacklist.enabled=true
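For reference, a minimal way to bring up the single-node cluster with the standard distribution scripts (paths assumed; start-slave.sh is the worker start script in this version):
$SPARK_HOME/sbin/start-master.sh
$SPARK_HOME/sbin/start-slave.sh spark://yourhost:7077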
From the shell, run:
import org.apache.spark.TaskContext
val rdd = sc.makeRDD(1 to 1000, 5).mapPartitions { it =>
  val context = TaskContext.get()
  // fail the first two attempts of every task so blacklisting kicks in
  if (context.attemptNumber() < 2) {
    throw new Exception("test attempt num")
  }
  it
}
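Since mapPartitions is lazy, an action is needed to actually run the tasks and hit the attempt failures; for example:
rdd.collect()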