Description
While testing the queue capacity limits, it appears that a job can exceed the
queue capacity limit by one task even though the user limit factor is 1. It is not
clear to me why this happens.
Here are the steps to reproduce (a configuration sketch follows the list):
1) set yarn.app.mapreduce.am.resource.mb to 2048 (default value)
2) set yarn.scheduler.capacity.root.default.user-limit-factor to 1.0 (default)
3) set yarn.scheduler.capacity.root.default.capacity to 90 (%)
4) For a cluster with a capacity of 56G, 90% is 50.4G, which rounds up to 51G.
5) submit a job with a large number of tasks, each task using 1G of memory.
6) the web UI shows that the used resource is 52G, which is 92.9% of the cluster
capacity (instead of the expected 90%) and 103.2% of the queue capacity
(instead of the expected 100%).
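
For reference, a minimal sketch of the configuration from steps 1-3, assuming the
scheduler properties go in capacity-scheduler.xml and the AM size in mapred-site.xml
(file placement may vary by deployment):

<!-- capacity-scheduler.xml: default queue capacity and user limit factor (steps 2-3) -->
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>90</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
  <value>1.0</value>
</property>

<!-- mapred-site.xml: MapReduce AM container size (step 1) -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>2048</value>
</property>

The job in step 5 can be any MapReduce job with many tasks whose map containers
request 1G, for example by setting mapreduce.map.memory.mb to 1024.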