Changes in NM capacity triggered from outside of regular scheduling would unbalance the existing distribution of allocations, potentially triggering preemption. You'd need special handling in the RM/scheduler for such scenarios, as sketched below.
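For illustration, a minimal self-contained sketch (all names invented here, not actual YARN classes) of the kind of special handling meant: when a node's capacity shrinks out of band, the scheduler suddenly has to pick preemption victims until the node fits again, a decision regular scheduling never had to make on that node:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/**
 * Hypothetical sketch (not actual YARN code) of the special handling an
 * RM/scheduler would need when an NM's capacity changes out of band.
 */
public class OutOfBandCapacityChange {

  /** Minimal stand-in for a running container's allocation. */
  static final class Alloc {
    final String containerId;
    final long memoryMb;
    final int priority; // lower value = more important
    Alloc(String id, long mem, int prio) {
      this.containerId = id; this.memoryMb = mem; this.priority = prio;
    }
  }

  /** Returns containers to preempt so that usage fits the new capacity. */
  static List<Alloc> victimsFor(List<Alloc> running, long newCapacityMb) {
    long used = running.stream().mapToLong(a -> a.memoryMb).sum();
    List<Alloc> victims = new ArrayList<>();
    if (used <= newCapacityMb) {
      return victims; // node still balanced, regular scheduling unaffected
    }
    // Over-allocated: preempt lowest-priority containers until the node fits.
    List<Alloc> byPriority = new ArrayList<>(running);
    byPriority.sort(Comparator.comparingInt((Alloc a) -> a.priority).reversed());
    for (Alloc a : byPriority) {
      if (used <= newCapacityMb) break;
      victims.add(a);
      used -= a.memoryMb;
    }
    return victims;
  }

  public static void main(String[] args) {
    List<Alloc> running = List.of(
        new Alloc("c1", 4096, 0), new Alloc("c2", 2048, 5), new Alloc("c3", 2048, 9));
    // NM capacity drops from 8 GB to 5 GB outside of regular scheduling.
    for (Alloc v : victimsFor(running, 5 * 1024)) {
      System.out.println("preempt " + v.containerId);
    }
  }
}
```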
The existing mechanism would/should work by simply killing off containers when necessary. The container fault-tolerance mechanism would/should take care of the rest (including preemption). We can do a better job of differentiating the faults induced by preemption, which would be straightforward if we expose a preemption API when we get around to implementing the preemption feature. If a container suspend/resume API is implemented, we can use that as well.
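For what it's worth, later YARN versions did expose exactly this differentiation through ContainerExitStatus.PREEMPTED; a hedged sketch of how an AM-side handler could use it (handleCompleted is a hypothetical hook name, not a YARN callback):

```java
import org.apache.hadoop.yarn.api.records.ContainerExitStatus;
import org.apache.hadoop.yarn.api.records.ContainerStatus;

/**
 * Sketch: an AM completed-container handler that treats preemption as a
 * non-fault so the fault-tolerance path can simply reschedule the work.
 */
public class PreemptionAwareHandler {

  private int failedCount = 0;

  // Hypothetical hook, invoked with each completed container's status.
  void handleCompleted(ContainerStatus status) {
    switch (status.getExitStatus()) {
      case ContainerExitStatus.SUCCESS:
        return; // normal completion
      case ContainerExitStatus.PREEMPTED:
        // Preemption-induced: don't count against the failure budget,
        // just re-request an equivalent container.
        reschedule(status.getContainerId().toString());
        return;
      default:
        failedCount++; // a genuine fault
        reschedule(status.getContainerId().toString());
    }
  }

  private void reschedule(String containerId) {
    System.out.println("re-requesting replacement for " + containerId);
  }
}
```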
It depends on how you design your AM that handles unmanaged containers. You could request several small resources at peak and then release them as you no longer need them.
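A sketch of that elastic pattern using the stock AMRMClient API (assumes the AM is already registered with the RM; the scaleWithDemand helper and its bookkeeping are illustrative, not a prescribed design):

```java
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

/**
 * Sketch of an elastic AM: ask for several small containers at peak load
 * and release them as demand falls. Error handling omitted.
 */
public class ElasticAmSketch {

  public static void scaleWithDemand(AMRMClient<ContainerRequest> rmClient,
                                     java.util.List<Container> held,
                                     int desired) throws Exception {
    Resource small = Resource.newInstance(512, 1); // 512 MB, 1 vcore
    Priority prio = Priority.newInstance(10);

    // Peak: request additional small containers.
    for (int i = held.size(); i < desired; i++) {
      rmClient.addContainerRequest(new ContainerRequest(small, null, null, prio));
    }
    // Off-peak: release containers we no longer need.
    while (held.size() > desired) {
      Container c = held.remove(held.size() - 1);
      rmClient.releaseAssignedContainer(c.getId());
    }
    rmClient.allocate(0.5f); // heartbeat; newly assigned containers arrive here
  }
}
```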
This requires many missing features in the RM in order to work properly: finer-grained OS/application resource metrics, application priority, conflict arbitration, preemption, and related security features (mostly authorization). This approach also makes it problematic to support coexistence of different instances/versions of YARN on the same physical cluster.
It is adding a new one; that is a change.
The change doesn't affect existing/future YARN applications. The management protocol allows existing/future cluster schedulers to expose appropriate resource views to (multiple instances/versions of) YARN in a straightforward manner.
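To make the shape of that protocol concrete, a purely hypothetical sketch (every name here is invented for illustration; none of this is an existing YARN interface):

```java
/**
 * Hypothetical management protocol sketch: an external cluster scheduler
 * tells each YARN instance what slice of a node it may use.
 */
public interface ResourceViewProtocol {

  /** A node-local resource slice granted to one YARN instance. */
  final class ResourceView {
    public final String nodeId;
    public final long memoryMb;
    public final int vcores;
    public ResourceView(String nodeId, long memoryMb, int vcores) {
      this.nodeId = nodeId; this.memoryMb = memoryMb; this.vcores = vcores;
    }
  }

  /** Grow or shrink the view a given YARN instance sees on a node. */
  void updateResourceView(String yarnInstanceId, ResourceView view);

  /** Report what the instance currently uses, so views can be arbitrated. */
  ResourceView getCurrentUsage(String yarnInstanceId, String nodeId);
}
```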
IMO, the solution is orthogonal to what you have proposed. It allows any existing non-YARN applications to efficiently coexist with YARN applications without having to write a special AM using the "unmanaged resource" API, and with no new features "required" in YARN now. In other words, it is a simple solution that allows YARN to coexist with other schedulers (including other instances/versions of YARN) that already have the features people use/want.
I'd be interested in hearing about cases where our approach "breaks" YARN applications in any way.