Affects Version/s: 0.23.3
Fix Version/s: None
Currently the nodemanager does not clean up running containers when it is restarted, so containers can be lost and stick around forever. We've seen this happen multiple times when the RM is restarted: when the RM comes back up it doesn't know what was running on the cluster, so it tells the NMs to reboot, and when an NM reboots it loses track of what it had running. If any of those containers are behaving badly, nothing is left that knows about them to kill them.
We should kill any leftover running containers when the nodemanager is being started. Note that at startup the NM needs some way to figure out which containers were previously running, and it must be careful not to kill anything it shouldn't.
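The startup cleanup described above boils down to a set difference: kill only the containers recorded from the previous run that the restarted NM does not know about. The sketch below is a minimal illustration of that decision logic, not YARN's actual implementation; the `ContainerReaper` class name and the idea of a "recorded" set (e.g. reconstructed from leftover pid files) are assumptions for the example.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the NM-restart cleanup decision: given the
// containers recorded on disk from the previous NM instance and the
// containers the restarted NM still knows about, compute the orphans
// that are safe to kill.
public class ContainerReaper {

    /**
     * Returns the container ids that should be killed: those left over
     * from the previous run but unknown to the restarted NM.
     */
    public static Set<String> findOrphans(Set<String> recordedContainers,
                                          Set<String> knownContainers) {
        Set<String> orphans = new HashSet<>(recordedContainers);
        // Never kill a container the new NM instance is still tracking.
        orphans.removeAll(knownContainers);
        return orphans;
    }
}
```

For example, if the previous run recorded containers c1, c2, and c3 but the restarted NM only knows about c2, then c1 and c3 are the orphans to kill. The real work is in reliably building the "recorded" set, which is why the NM would need to persist enough state about its containers before any restart.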
Note that we should also try to kill any running containers when the nodemanager is shutting down (JIRA 4213 was filed for this).
This might change somewhat once RM restart is implemented, if tasks can actually survive the RM/NM being rebooted, but that can be addressed at that point.
|Field||Original Value||New Value|
|Project||Hadoop Map/Reduce [ 12310941 ]||Hadoop YARN [ 12313722 ]|
|Affects Version/s||0.23.3 [ 12322841 ]|
|Affects Version/s||0.23.3 [ 12320060 ]|
|Target Version/s||0.23.3 [ 12320060 ]|
|Component/s||nodemanager [ 12319323 ]|
|Component/s||mrv2 [ 12314301 ]|
|Component/s||nodemanager [ 12315341 ]|
|Status||Open [ 1 ]||Resolved [ 5 ]|
|Resolution||Duplicate [ 3 ]|