Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Versions: 0.21.1, 0.22.0
- Hadoop Flags: Reviewed
Description
Scenario:
You have a cluster with 600 map slots and 3 pools. The fair share for each pool is 200 slots to start with, and the fair-share preemption timeout is 5 minutes.
1) Pool1 schedules 300 map tasks first.
2) Pool2 then schedules another 300 map tasks.
3) Pool3 demands 300 map tasks but doesn't get any slots because all slots are taken.
4) After 5 minutes, pool3 should preempt 200 map slots. Instead of preempting 100 slots each from pool1 and pool2, the bug causes it to preempt all 200 slots from pool2 (the last one started), pushing pool2 below its fair share. This happens because the preemptTask method does not reduce a pool's remaining task count while preempting its tasks.
The scenario above may be an extreme case, but some amount of excess preemption will occur because of this bug.
The patch I created is for 0.22.0, but the fix should apply to 0.21 as well, since it appears to have the same bug.
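For illustration, here is a minimal Java sketch of how the preemption loop should behave after the fix: it caps what it takes from each over-share pool at that pool's excess over its fair share and decrements the pool's running-task count as it preempts. The class, method, and field names are hypothetical and simplified; this is not the actual FairScheduler code.

    import java.util.ArrayList;
    import java.util.List;

    class PoolState {
        final String name;
        int runningTasks;     // tasks the pool currently has running
        final int fairShare;  // slots the pool is entitled to

        PoolState(String name, int runningTasks, int fairShare) {
            this.name = name;
            this.runningTasks = runningTasks;
            this.fairShare = fairShare;
        }
    }

    class PreemptionSketch {
        /**
         * Preempt up to tasksToPreempt tasks for a starved pool, taking tasks
         * only from pools running above their fair share. Pools are walked in
         * reverse start order (most recently started first).
         */
        static void preemptTasks(List<PoolState> pools, int tasksToPreempt) {
            for (int i = pools.size() - 1; i >= 0 && tasksToPreempt > 0; i--) {
                PoolState pool = pools.get(i);
                int overShare = pool.runningTasks - pool.fairShare;
                if (overShare <= 0) {
                    continue; // pool is at or below its fair share, leave it alone
                }
                // Take no more than the pool's excess over its fair share.
                int toTake = Math.min(overShare, tasksToPreempt);
                // The key point: reduce the pool's running-task count as tasks
                // are preempted, so the pool can never be driven below its
                // fair share by later preemption decisions.
                pool.runningTasks -= toTake;
                tasksToPreempt -= toTake;
                System.out.println(pool.name + ": preempted " + toTake);
            }
        }

        public static void main(String[] args) {
            // Scenario from the description: 600 slots, 3 pools, fair share 200 each.
            List<PoolState> pools = new ArrayList<>();
            pools.add(new PoolState("pool1", 300, 200));
            pools.add(new PoolState("pool2", 300, 200));
            // pool3 is starved and needs 200 slots to reach its fair share.
            preemptTasks(pools, 200);
            // With the cap, pool2 gives up 100 and pool1 gives up 100;
            // without it, all 200 would come from pool2, pushing it below 200.
        }
    }

Under these assumptions, the sketch preempts 100 tasks from pool2 and 100 from pool1, which is the behavior the description says the fix should produce.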