Recently, in our internal use case of native K8s integration with Kubernetes HA enabled, we found that the leader-related ConfigMaps could be left behind in some corner cases.
After some investigation, I think this is likely caused by an inappropriate shutdown process.
In ClusterEntrypoint#shutDownAsync, we first call closeClusterComponent, which also deregisters the Flink application from the cluster manager (e.g. Yarn, K8s). Then we call stopClusterServices and cleanupDirectories. Suppose the cluster manager performs the deregistration very quickly: the JobManager process may then receive SIGNAL 15 (SIGTERM) before or while it is executing stopClusterServices and cleanupDirectories. The JVM process exits directly on the signal, so these two methods may never run, leaving the leader ConfigMaps behind.
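To illustrate the ordering problem, here is a minimal, simplified sketch (not the actual Flink code; all names besides ClusterEntrypoint's method names are hypothetical) of a shutdown chain where deregistration completes before the service shutdown and directory cleanup steps start. The window between the two stages is exactly where an externally delivered SIGTERM can terminate the JVM first:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical simplification of the shutDownAsync flow described above.
public class ShutdownRaceSketch {
    static final List<String> log = new ArrayList<>();

    // Stage 1: deregister the application from the cluster manager.
    static CompletableFuture<Void> closeClusterComponent() {
        return CompletableFuture.runAsync(() -> log.add("deregister"));
    }

    // Stage 2: only runs if the JVM survives long enough.
    static void stopClusterServices() { log.add("stopClusterServices"); }
    static void cleanupDirectories()  { log.add("cleanupDirectories"); }

    static CompletableFuture<Void> shutDownAsync() {
        // RACE WINDOW: once deregistration completes, the cluster manager
        // may send SIGTERM; if it arrives here, the steps in thenRun
        // (including HA ConfigMap cleanup) never execute.
        return closeClusterComponent().thenRun(() -> {
            stopClusterServices();
            cleanupDirectories();
        });
    }

    public static void main(String[] args) {
        shutDownAsync().join();
        System.out.println(log);
    }
}
```

In the happy path the log shows all three steps in order; the report above is about the case where the process dies between step 1 and steps 2–3.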