In Flink 1.13 (and older versions), TaskManager failures stall processing for a significant amount of time, even though the system receives indications of the failure almost immediately through network connection losses.
This is due to the high default heartbeat timeout of 50 seconds, chosen to accommodate GC pauses, transient network disruptions, or generally slow environments (otherwise, we would unregister healthy TaskManagers).
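For reference, the two settings involved are heartbeat.timeout and heartbeat.interval. A minimal sketch of how they could be tuned programmatically, assuming the HeartbeatManagerOptions config keys (the values below are the 1.13 defaults, shown for illustration, not a recommendation):

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.HeartbeatManagerOptions;

public class HeartbeatConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Default: 50_000 ms. This is the timeout discussed above; lowering it
        // detects TaskManager losses faster but risks unregistering healthy TMs
        // during long GC pauses or transient network disruptions.
        conf.setLong(HeartbeatManagerOptions.HEARTBEAT_TIMEOUT, 50_000L);
        // Default: 10_000 ms between heartbeat requests.
        conf.setLong(HeartbeatManagerOptions.HEARTBEAT_INTERVAL, 10_000L);
        System.out.println(conf);
    }
}
```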
Such a high timeout can lead to disruptions in the processing (no processing for certain periods, high latencies, buildup of consumer lag, etc.). In Reactive Mode (FLINK-10407), the issue surfaces on scale-down events: the loss of a TaskManager is immediately visible in the logs, but the job is stuck in "FAILING" for quite a while until the TaskManager is actually deregistered. (Note that this issue is less critical in an autoscaling setup, because Flink can control scale-down events and trigger them proactively.)
On the attached metrics dashboard, one can see that the job suffers significant throughput drops and consumer lag during scale-down (as well as CPU usage spikes while processing the queued events, which in turn trigger unwarranted scale-up events again).
Two ideas to address this problem:
- Score TaskManagers based on certain signals (# exceptions reported, exception types (connection losses, Akka failures), failure frequencies, ...) and blacklist them accordingly (see the first sketch after this list).
- Introduce a best-effort TaskManager unregistration mechanism: when a TaskManager receives a SIGTERM, it sends a final message to the JobManager saying "goodbye", and the JobManager can immediately remove the TM from its bookkeeping (see the second sketch after this list).
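A rough sketch of the scoring idea from the first bullet. The signal types, weights, and threshold are purely illustrative assumptions, not an existing Flink API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical scorer that blacklists a TaskManager once its failure score crosses a threshold. */
public class TaskManagerScorer {

    /** Illustrative signal types; real signals could also include failure frequency over time. */
    public enum Signal { CONNECTION_LOSS, AKKA_FAILURE, GENERIC_EXCEPTION }

    private static final double BLACKLIST_THRESHOLD = 10.0; // assumed value
    private final Map<String, Double> scores = new ConcurrentHashMap<>();

    /** Weight each reported signal and accumulate it per TaskManager id. */
    public void report(String taskManagerId, Signal signal) {
        double weight;
        switch (signal) {
            case CONNECTION_LOSS: weight = 5.0; break;
            case AKKA_FAILURE: weight = 3.0; break;
            default: weight = 1.0;
        }
        scores.merge(taskManagerId, weight, Double::sum);
    }

    /** A TaskManager counts as blacklisted once its accumulated score reaches the threshold. */
    public boolean isBlacklisted(String taskManagerId) {
        return scores.getOrDefault(taskManagerId, 0.0) >= BLACKLIST_THRESHOLD;
    }
}
```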
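And a sketch of the best-effort unregistration from the second bullet: on SIGTERM the TaskManager sends one final "goodbye" RPC before shutting down, so the JobManager can drop it from its bookkeeping without waiting for the heartbeat timeout. JobManagerGoodbyeGateway is a hypothetical placeholder, not an existing Flink interface:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class GoodbyeOnSigterm {

    /** Hypothetical RPC surface on the JobManager side; not an existing Flink gateway. */
    interface JobManagerGoodbyeGateway {
        CompletableFuture<Void> taskManagerSaysGoodbye(String taskManagerId);
    }

    /** Register a JVM shutdown hook (triggered by SIGTERM) that sends the final message. */
    static void installGoodbyeHook(JobManagerGoodbyeGateway gateway, String taskManagerId) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try {
                // Best effort: wait briefly for the RPC so the JobManager can remove the TM
                // immediately instead of waiting for the heartbeat timeout.
                gateway.taskManagerSaysGoodbye(taskManagerId).get(2, TimeUnit.SECONDS);
            } catch (Exception e) {
                // Ignore: if the goodbye does not arrive, the heartbeat timeout still applies.
            }
        }));
    }
}
```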