Briefly, to reproduce:
- Run the JT with CapacityTaskScheduler [say, cluster max map memory = 8G, map memory per slot = 2G]
- Run two TTs with varied capacity: say, one with 4 map slots, the other with 3 map slots.
- Run a job with two tasks, each demanding memory worth at least 4 slots (map memory = 7G or so).
- The job begins running on TT #1, but also ends up reserving the 3 slots on TT #2, because the scheduler does not check the tracker's maximum slot limit when reserving (it reserves greedily, hoping to gain more slots in the future).
- Other jobs that could have used TT #2's 3 slots are thereby blocked out by this reservation, which can never be fulfilled since the task will never fit on that tracker.
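The missing check described above can be sketched as follows. This is an illustrative model, not the actual CapacityTaskScheduler code; the class and method names (`ReservationCheck`, `shouldReserve`) are hypothetical:

```java
// Hypothetical sketch of the missing guard: before reserving slots
// on a TaskTracker, verify the tracker could ever satisfy the task.
public class ReservationCheck {

    /**
     * Returns true only if the tracker's total slot count can ever
     * cover the slots the task needs. Without this check, a greedy
     * reservation pins slots on a tracker that can never run the
     * task, which is the behavior reported above.
     */
    static boolean shouldReserve(int slotsNeeded, int trackerMaxSlots) {
        return slotsNeeded <= trackerMaxSlots;
    }

    public static void main(String[] args) {
        // A 7G task with 2G per slot needs ceil(7/2) = 4 slots.
        int slotsNeeded = 4;
        System.out.println(shouldReserve(slotsNeeded, 4)); // TT #1: ok to reserve
        System.out.println(shouldReserve(slotsNeeded, 3)); // TT #2: never reserve
    }
}
```

With this guard, TT #2 (3 slots) would never be reserved for a 4-slot task, leaving its slots free for other jobs.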
I've not yet tested MR2 for this, so feel free to weigh in if it affects MR2 as well.
For MR1, I've attached a test case that demonstrates the issue. A fix that checks reservations against a tracker's maximum slots will follow.