Details
Type: Bug
Status: Resolved
Priority: Major
Resolution: Duplicate
Affects Version/s: 2.8.0
Fix Version/s: None
Component/s: None
Description
We found a case where a queue is using 301 vcores while its maximum is set to 300; the same happens for its memory usage. The documentation (see the Hadoop 2.7.1 FairScheduler documentation on Apache) states:

"A queue will never be assigned a container that would put its aggregate usage over this limit."

Either the documentation or the observed behaviour is clearly incorrect.
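For context, the limit in question is the per-queue maxResources setting in the FairScheduler allocation file (fair-scheduler.xml). A minimal sketch of such a configuration, assuming a queue capped at 300 vcores (the queue name and memory figure are illustrative, not taken from the reported cluster):

{code:xml}
<?xml version="1.0"?>
<allocations>
  <!-- Hypothetical queue, shown only to illustrate the maxResources cap
       discussed above; 300 vcores matches the reported limit, the memory
       value is an assumption. -->
  <queue name="example">
    <maxResources>300000 mb, 300 vcores</maxResources>
  </queue>
</allocations>
{code}

Per the quoted documentation, the scheduler should never assign a container that would push the queue's aggregate usage past these values, which the reported 301-vcore usage appears to violate.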
Issue Links
- is duplicated by: YARN-3126 FairScheduler: queue's usedResource is always more than the maxResource limit (Resolved)