It looks like the problem occurs because ephemeral ports are configured for the NodeManagers. NMs are identified by host:port pairs, and when ephemeral ports are used we lose the ability to differentiate between a new node joining the cluster and a lost node rejoining the cluster.
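To make the identity problem concrete, here is a minimal hypothetical sketch (not actual YARN code, which is Java; names are illustrative) of an RM-style registry keyed by (host, port). With a fixed port a restarted NM re-registers under the same key, but with an ephemeral port it shows up under a new key and is indistinguishable from a brand-new node:

```python
# Hypothetical sketch: NMs identified by a (host, port) NodeId.
active_nodes = {}  # NodeId (host, port) -> node state

def register_nm(host, port):
    node_id = (host, port)
    if node_id in active_nodes:
        return "rejoin"          # same NodeId: recognizably the same NM
    active_nodes[node_id] = "RUNNING"
    return "new"

# Fixed port: a restart re-registers under the same NodeId.
assert register_nm("worker1", 45454) == "new"
assert register_nm("worker1", 45454) == "rejoin"

# Ephemeral port: the restarted NM picks a fresh port and looks new;
# the stale (host, old_port) entry lingers until the NM timeout expires.
assert register_nm("worker2", 49152) == "new"
assert register_nm("worker2", 50123) == "new"   # same machine, seen as new
```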
In the screenshot's scenario, the ResourceManager believes that 4 nodes are in the cluster, and only after the NM timeout interval (default 10min) will it realize 3 of the 4 nodes aren't there. This is not much different from a case where a cluster has 4 separate NM machines and three of the NMs go down at the same time. The reported cluster capacity will be wrong during the timeout interval because the RM will not yet have realized that the capacity was lost.
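The stale-capacity window can be shown with toy numbers (the per-node memory here is an assumption, not taken from the screenshot): until the stale entries expire, reported capacity is inflated by every lingering host:port registration.

```python
# Toy illustration: one machine's NM restarted three times with ephemeral
# ports, so the RM holds 4 distinct NodeIds for 1 live NM. Until the
# NM timeout expires for the 3 stale entries, capacity is over-reported.
node_memory_mb = 8192            # assumed per-node capacity
registered_node_ids = 4          # 1 live NM + 3 stale host:port entries
live_node_ids = 1

reported_capacity = registered_node_ids * node_memory_mb
real_capacity = live_node_ids * node_memory_mb

assert reported_capacity == 32768
assert real_capacity == 8192
# After the timeout the RM expires the stale nodes and the two agree.
```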
If ephemeral ports are not used then this problem cannot occur today, because MAPREDUCE-3070 did not really fix the quick NM reboot scenario. The NM reboot scenario only "works" with ephemeral ports because the RM sees the restarted NM as a new NM joining the cluster (followed by the loss of an NM after the NM timeout) rather than as a reboot of an existing NM. If a cluster is configured without ephemeral ports, a restarting NM cannot rejoin the cluster until the NM timeout interval has passed on the RM, and by then the node's resources will have been removed from the cluster before being added back in when it rejoins.
Ideally we should put in a real fix for MAPREDUCE-3070 so the RM can recognize that an existing NM trying to join the cluster is a reboot scenario, instead of rejecting the new NM instance. Of course, the RM would have to kill off all the existing containers for that NM when it rejoins.
The issue of detecting the difference between a new NM joining and an existing NM rejoining when ephemeral ports are configured is being tracked in