It looks like the KubernetesLeaderRetrievalDriver is not closed even after the KubernetesLeaderElectionDriver is closed and the job reaches a globally terminal state.
This leaves many ConfigMap watches active, each holding a connection to the Kubernetes API server.
Once the number of connections exceeds the max concurrent requests limit, new ConfigMap watches cannot be started, which eventually causes all newly submitted jobs to time out.
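To illustrate the expected behavior, here is a minimal sketch (not the actual Flink code) of what a fixed retrieval driver's close() should guarantee: the ConfigMap watch opened by the driver is released together with the driver itself. The class and field names (configMapWatch, running) are hypothetical; only the fabric8 Watch handle is a real API.

{code:java}
import io.fabric8.kubernetes.client.Watch;

// Hypothetical sketch of a retrieval driver that releases its watch on close().
public class LeaderRetrievalDriverSketch implements AutoCloseable {
    private final Watch configMapWatch; // fabric8 watch handle, Closeable
    private volatile boolean running = true;

    LeaderRetrievalDriverSketch(Watch configMapWatch) {
        this.configMapWatch = configMapWatch;
    }

    @Override
    public void close() {
        if (!running) {
            return; // make close() idempotent
        }
        running = false;
        // Closing the watch frees the underlying HTTP connection to the
        // API server; without this, the connection stays open even after
        // the job reaches a globally terminal state.
        configMapWatch.close();
    }
}
{code}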
Yang Wang Till Rohrmann This may be related to FLINK-20695. Could you confirm this issue?
However, when many jobs are running in the same session cluster, their ConfigMap watches all need to stay active. Maybe we should merge all the ConfigMap watches into a single shared watch, as sketched below?
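A rough sketch of that idea: one shared watch with a label selector covering all leader ConfigMaps of the cluster, dispatching events to per-job listeners by ConfigMap name. This would use a single connection regardless of the number of jobs. The label key "app", the listener interface, and the class name are assumptions for illustration, not Flink's actual implementation; reconnection on watch failure is omitted.

{code:java}
import io.fabric8.kubernetes.api.model.ConfigMap;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.Watch;
import io.fabric8.kubernetes.client.Watcher;
import io.fabric8.kubernetes.client.WatcherException;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Hypothetical: one watch for all leader ConfigMaps of a session cluster.
public class SharedConfigMapWatcher implements AutoCloseable {
    private final Map<String, Consumer<ConfigMap>> listeners = new ConcurrentHashMap<>();
    private final Watch watch;

    public SharedConfigMapWatcher(KubernetesClient client, String namespace, String clusterId) {
        // A single connection for the whole cluster instead of one watch
        // (and one connection) per job.
        this.watch = client.configMaps()
                .inNamespace(namespace)
                .withLabel("app", clusterId) // assumed label selector
                .watch(new Watcher<ConfigMap>() {
                    @Override
                    public void eventReceived(Action action, ConfigMap cm) {
                        // Route the event to the job that registered for
                        // this ConfigMap, if any.
                        Consumer<ConfigMap> listener =
                                listeners.get(cm.getMetadata().getName());
                        if (listener != null) {
                            listener.accept(cm);
                        }
                    }

                    @Override
                    public void onClose(WatcherException cause) {
                        // Re-establishing the shared watch is omitted here.
                    }
                });
    }

    public void register(String configMapName, Consumer<ConfigMap> listener) {
        listeners.put(configMapName, listener);
    }

    public void unregister(String configMapName) {
        listeners.remove(configMapName);
    }

    @Override
    public void close() {
        watch.close();
    }
}
{code}

With something like this, a job finishing only needs to unregister its listener; the shared connection stays within the client's concurrency limit no matter how many jobs the session cluster runs.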