As discussed in the PR, if the watchReconnectLimit is configured by users via Java properties or environment variables, the watch may be stopped, and subsequent changes will no longer be processed. So we need to throw a fatal exception in KubernetesResourceManager when the old watch is closed with an exception.
> Why do we not create a new watcher in KubernetesResourceManager when the old one closed exceptionally?
After checking the WatchConnectionManager implementation in the fabric8 kubernetes client: if the web socket is closed exceptionally, it checks the reconnectLimit and schedules a reconnect if needed. When a reconnect succeeds, the currentReconnectAttempt is reset to 0. By default, it retries forever. When users explicitly specify the reconnectLimit, we should respect it.
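The reconnect policy described above can be modeled with a small sketch. This is a simplified illustration of the behavior, not the actual fabric8 `WatchConnectionManager` code; the class and method names here are ours:

```java
// Simplified model of the reconnect policy described above:
// count attempts, reset the counter on a successful reconnect,
// and stop once the user-configured limit is reached.
// A reconnectLimit < 0 means "retry forever" (the default behavior).
public class ReconnectPolicy {
    private final int reconnectLimit;
    private int currentReconnectAttempt = 0;

    public ReconnectPolicy(int reconnectLimit) {
        this.reconnectLimit = reconnectLimit;
    }

    /** Returns true if another reconnect should be scheduled. */
    public boolean shouldReconnect() {
        return reconnectLimit < 0 || currentReconnectAttempt < reconnectLimit;
    }

    /** Called each time a reconnect is attempted. */
    public void onReconnectAttempt() {
        currentReconnectAttempt++;
    }

    /** A successful reconnect resets the attempt counter to 0. */
    public void onReconnectSuccess() {
        currentReconnectAttempt = 0;
    }
}
```

With an explicit limit, the watch gives up after that many failed attempts, which is exactly why the resource manager must notice the exceptional close instead of assuming the client will always recover.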
Another reason is that an exceptional web socket close is usually caused by network problems or port abuse. In such a situation, it is better to fail the jobmanager pod and retry in a new one.
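The escalation path can be sketched as follows. This is a hypothetical illustration, assuming the actual Flink interfaces (the `FatalErrorHandler` and `PodWatchHandler` names here are stand-ins): when the watch closes with a cause, we forward a fatal error rather than quietly recreating the watch.

```java
// Hypothetical sketch: on an exceptional watch close, escalate to a
// fatal-error handler so the jobmanager pod fails and is restarted,
// instead of silently creating a new watcher.
public class PodWatchHandler {
    /** Stand-in for the resource manager's fatal-error callback. */
    public interface FatalErrorHandler {
        void onFatalError(Throwable t);
    }

    private final FatalErrorHandler fatalErrorHandler;

    public PodWatchHandler(FatalErrorHandler fatalErrorHandler) {
        this.fatalErrorHandler = fatalErrorHandler;
    }

    /** Called when the watch closes; a non-null cause means an exceptional close. */
    public void onClose(Exception cause) {
        if (cause != null) {
            // Respect the user-configured reconnect limit: do not recreate the
            // watch here, let the pod fail over to a fresh one instead.
            fatalErrorHandler.onFatalError(
                    new RuntimeException("Pod watch closed exceptionally", cause));
        }
        // A normal close (cause == null) needs no escalation.
    }
}
```

This keeps the failure visible: a graceful close is ignored, while an exceptional one takes down the process so the usual pod restart mechanism kicks in.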