Status: Patch Available
Affects Version/s: None
Fix Version/s: None
Component/s: capacity scheduler
Recently we've been investigating a scenario where applications submitted to a lower-priority queue could not get scheduled, because a higher-priority queue in the same hierarchy could not satisfy the allocation request. Both queues belonged to the same partition.
If we disabled node labels, the problem disappeared.
The root cause is that RegularContainerAllocator always allocates a container for the request, even when it should not.
- Cluster total resources: 3 nodes, 15 GB memory, 24 vcores (5 GB / 8 vcores per node)
- Partition "shared" was created with 2 nodes
- "root.lowprio" (priority = 20) and "root.highprio" (priorty = 40) were added to the partition
- Both queues have a limit of <memory:5120, vCores:8>
- Using DominantResourceCalculator (see the comparison sketch after this list)
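
Since both memory and vcores matter in this scenario, it may help to recall how DominantResourceCalculator compares two resources: by their dominant share of the cluster. Below is a minimal sketch using the numbers from this setup (the partition total of <memory:10240, vCores:16> is simply 2 nodes x <5 GB, 8 vcores>); these are the underlying utility calls, not the scheduler's actual call sites:

```java
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.DominantResourceCalculator;
import org.apache.hadoop.yarn.util.resource.Resources;

public class DominantShareSketch {
  public static void main(String[] args) {
    DominantResourceCalculator drc = new DominantResourceCalculator();
    // The "shared" partition: 2 nodes x <5 GB, 8 vcores>.
    Resource partition = Resource.newInstance(10240, 16);
    Resource usage = Resource.newInstance(2560, 5);  // queue usage
    Resource limit = Resource.newInstance(5120, 8);  // queue limit

    // usage's dominant share: max(2560/10240, 5/16) = 0.3125 (vcores)
    // limit's dominant share: max(5120/10240, 8/16) = 0.5
    // So the usage is considered "less than" the limit.
    System.out.println(
        Resources.lessThan(drc, partition, usage, limit)); // true
  }
}
```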
Submit a distributed shell application to "highprio" with the switches "-num_containers 3 -container_vcores 4". The memory allocation is 512 MB per container.
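
In terms of YARN records, the application's ask corresponds roughly to the following (a sketch only; the priority value is arbitrary and the label expression is assumed to target the "shared" partition):

```java
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class AskSketch {
  public static void main(String[] args) {
    // Roughly what the application asks for: 3 containers of
    // <512 MB, 4 vcores> on the "shared" partition.
    ResourceRequest ask = ResourceRequest.newInstance(
        Priority.newInstance(1),      // request priority (arbitrary)
        ResourceRequest.ANY,          // no locality constraint
        Resource.newInstance(512, 4), // per-container capability
        3,                            // number of containers
        true,                         // relax locality
        "shared");                    // node label expression
    System.out.println(ask);
  }
}
```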
Chain of events:
1. The queue is filled with containers until it reaches the usage <memory:2560, vCores:5>
2. A node update event is pushed to the CapacityScheduler from a node which is part of the partition
3. AbstractCSQueue.canAssignToQueue() returns true, because the current usage is smaller than the limit resource <memory:5120, vCores:8>
4. LeafQueue.assignContainers() then runs successfully and allocates a container of <memory:512, vCores:4>
5. But we cannot commit the resource request, because total usage would rise to <memory:3072, vCores:9>, violating the 8-vcore limit (see the sketch below)
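
The arithmetic behind step 5, sketched with YARN's resource utilities (again, these are the underlying comparisons, not the scheduler's actual call sites):

```java
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

public class CommitCheckSketch {
  public static void main(String[] args) {
    Resource limit = Resource.newInstance(5120, 8); // queue limit
    Resource used  = Resource.newInstance(2560, 5); // usage at step 1
    Resource ask   = Resource.newInstance(512, 4);  // container at step 4

    // Committing the allocation would raise usage to <3072, 9>;
    // memory still fits, but 9 vcores exceed the 8-vcore limit.
    Resource afterCommit = Resources.add(used, ask);
    System.out.println(afterCommit);                          // <memory:3072, vCores:9>
    System.out.println(Resources.fitsIn(afterCommit, limit)); // false
  }
}
```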
The problem is that on every node heartbeat we always try to assign a container to the same application in "highprio", so applications in "lowprio" can never make progress.
RegularContainerAllocator.assignContainer() does not handle this case well. The allocation is rejected only when a specific condition is satisfied, but if node labels are in use, we enter a different code path and the allocation succeeds whenever the node has room for the container.
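
For illustration only, here is a hypothetical sketch of the kind of guard the labeled code path is missing: before allocating, the queue's usage plus the request should be checked against the queue limit, not just against the node's available space. This is not the actual fix, and the class, field, and method names below are invented:

```java
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

// Hypothetical sketch, not the real RegularContainerAllocator code;
// queueUsage/queueLimit and tryAllocate are invented names.
class LabeledPathGuardSketch {
  private final Resource queueUsage;
  private final Resource queueLimit;

  LabeledPathGuardSketch(Resource queueUsage, Resource queueLimit) {
    this.queueUsage = queueUsage;
    this.queueLimit = queueLimit;
  }

  /** Returns true only if the container could actually be committed. */
  boolean tryAllocate(Resource required, Resource nodeAvailable) {
    // What the labeled path effectively checks today: node space only.
    if (!Resources.fitsIn(required, nodeAvailable)) {
      return false;
    }
    // The missing guard: would the queue limit still hold after commit?
    // Without it, the allocation succeeds here, fails at commit time,
    // and is retried on every heartbeat, starving lower-priority queues.
    return Resources.fitsIn(
        Resources.add(queueUsage, required), queueLimit);
  }
}
```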