We found a case where a queue was using 301 vcores while its maximum was set to 300, and the same overshoot occurred for memory usage. The Hadoop 2.7.1 FairScheduler documentation on apache.org states:
A queue will never be assigned a container that would put its aggregate usage over this limit.
Given the observed usage, either the documentation or the behaviour is incorrect.
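For reference, the limit described above is set via maxResources in the FairScheduler allocation file (fair-scheduler.xml). The queue name and memory value below are hypothetical; only the 300-vcore cap is taken from the report:

```xml
<?xml version="1.0"?>
<allocations>
  <queue name="example_queue">
    <!-- Cap the queue's aggregate usage; per the docs, a container
         should never be assigned if it would push usage past this. -->
    <maxResources>300000 mb, 300 vcores</maxResources>
  </queue>
</allocations>
```

With this configuration in place, the queue was nonetheless observed at 301 vcores, exceeding the stated cap.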