Details
Type: Bug
Status: Resolved
Priority: Major
Resolution: Incomplete
Affects Version/s: 1.6.1
Fix Version/s: None
Description
I have a job which, after a while in one of its stages, grinds to a halt: it goes from processing around 300k tasks in 15 minutes to fewer than 1000 in the next hour. The driver ends up using 100% CPU on a single core (out of 4), the executors start failing to receive heartbeat responses, tasks are not scheduled, and results trickle in.
For this stage the maximum scheduler delay is 15 minutes, while the 75th percentile is 4 ms.
It appears that TaskSchedulerImpl does most of its work whilst holding the global synchronised lock for the class. This lock is shared between at least:
TaskSetManager.canFetchMoreResults
TaskSchedulerImpl.handleSuccessfulTask
TaskSchedulerImpl.executorHeartbeatReceived
TaskSchedulerImpl.statusUpdate
TaskSchedulerImpl.checkSpeculatableTasks
This looks to severely limit the latency and throughput of the scheduler, and causes my job to fail outright due to taking too long.
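To illustrate the contention pattern, here is a minimal standalone sketch (not Spark's actual code; the class and method names below merely mimic the ones listed above) showing how methods that all synchronise on the same instance lock serialise each other, so a burst of status updates delays heartbeat handling:

```scala
// Minimal sketch of the contention: every method guards its work with the same
// instance lock, so a flood of statusUpdate calls delays heartbeat processing.
object SchedulerLockSketch {

  class SchedulerLike {
    def statusUpdate(taskId: Long): Unit = synchronized {
      Thread.sleep(1) // stand-in for per-task bookkeeping done under the lock
    }

    def executorHeartbeatReceived(executorId: String): Boolean = synchronized {
      true // heartbeat handling cannot even start until the lock is free
    }
  }

  def main(args: Array[String]): Unit = {
    val scheduler = new SchedulerLike

    // One thread hammers the scheduler with status updates...
    val updater = new Thread(new Runnable {
      def run(): Unit = (1L to 500L).foreach(scheduler.statusUpdate)
    })
    updater.start()
    Thread.sleep(100) // let the updater get going

    // ...while another thread (here, the main thread) handles a heartbeat and
    // measures how long it had to wait for the shared lock.
    val start = System.nanoTime()
    scheduler.executorHeartbeatReceived("exec-1")
    println(f"heartbeat waited ${(System.nanoTime() - start) / 1e6}%.1f ms for the lock")

    updater.join()
  }
}
```

With hundreds of thousands of tasks the per-task work done under that lock is entered constantly, so heartbeat and result handling end up queueing behind it.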
Attachments
Issue Links
- relates to: SPARK-13279 Scheduler does O(N^2) operation when adding a new task set (making it prohibitively slow for scheduling 200K tasks) (Resolved)