Details
Type: Bug
Status: Resolved
Priority: Critical
Resolution: Duplicate
Affects Version/s: 2.0.0, 2.0.2, 2.1.0, 2.1.1, 2.2.0
Fix Version/s: None
Environment: Spark 2.1 / Hadoop 2.6
Description
When the ListenerBus event queue fills up, Spark dynamic allocation stops working: Spark fails to shrink the number of executors when there are no active jobs, because the driver "thinks" jobs are still active after dropping the events that would have marked them as finished.
P.S. What's worse, it also makes Spark flood the YARN ResourceManager with reservation requests, so YARN preemption doesn't function properly either (we're on Spark 2.1 / Hadoop 2.6).
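A possible mitigation (not proposed in this ticket, only a sketch): SPARK-15703, linked below, made the ListenerBus event queue size configurable via spark.scheduler.listenerbus.eventqueue.size, so raising it reduces the chance of dropped events that leave the driver with a stale view of active jobs. The value 100000 below is an illustrative assumption; the right size depends on the workload and available driver memory.

    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession

    // Enlarge the listener bus event queue (default 10000 in Spark 2.x) so
    // scheduler events are less likely to be dropped under heavy load.
    // A larger queue trades driver memory for fewer dropped events.
    val conf = new SparkConf()
      .setAppName("listener-bus-queue-sketch")
      .set("spark.scheduler.listenerbus.eventqueue.size", "100000")
      // Dynamic allocation as in the reported setup (it also needs the
      // external shuffle service, omitted here for brevity).
      .set("spark.dynamicAllocation.enabled", "true")

    val spark = SparkSession.builder().config(conf).getOrCreate()

This does not address the root cause (slow event processing, tracked in SPARK-18838); it only makes the queue overflow less likely.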
Issue Links
- duplicates
  - SPARK-18838 High latency of event processing for large jobs (Resolved)
- is related to
  - SPARK-18838 High latency of event processing for large jobs (Resolved)
- relates to
  - SPARK-15703 Make ListenerBus event queue size configurable (Resolved)