After upgrading to 1.15.1 we started getting the following error while running a job:
org.apache.flink.runtime.io.network.netty.exception.LocalTransportException: Sending the partition request to '/XXX.XXX.XX.32:6121 (#0)' failed. at org.apache.flink.runtime.io.network.netty.NettyPartitionRequestClient$1.operationComplete(NettyPartitionRequestClient.java:145) ....
Caused by: org.apache.flink.shaded.netty4.io.netty.channel.StacklessClosedChannelException at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.write(Object, ChannelPromise)(Unknown Source)
After investigation we managed to narrow it down to the exact sequence of events that triggers the issue:
- Deploying a job on a fresh Kubernetes session cluster with multiple TaskManagers (TM1 and TM2) succeeds. The job has multiple partitions running on both TM1 and TM2.
- One TaskManager, TM2 (XXX.XXX.XX.32), fails for an unrelated reason, for example an OOM exception.
- The Kubernetes pod with TaskManager TM2 is restarted. The pod retains the same IP address as before.
- The JobManager is able to pick up the restarted TM2 (XXX.XXX.XX.32).
- The job is restarted because it was running on the failed TaskManager TM2.
- TM1's data channel to TM2 is closed and we get LocalTransportException: Sending the partition request to '/XXX.XXX.XX.32:6121 (#0)' failed while the job is running.
- When we explicitly delete the pod with TM2, a new pod is created with a different IP address and the job is able to start again.
It is important to note that we did not encounter this issue with the previous 1.14.4 version; TaskManager restarts did not cause such errors there.
Please see the attached Kubernetes deployments and reduced logs from the JobManager. The TaskManager logs show errors before the failure, but nothing significant after the restart.
Setting taskmanager.network.max-num-tcp-connections to a very high number works around the problem.
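For reference, the workaround can be applied in flink-conf.yaml; the value 1000 below is an arbitrary illustrative choice, not a recommendation:

```
# Workaround: raise the number of TCP connections per endpoint so that
# fresh connections are opened instead of reusing a stale channel.
# (1000 is an example value, tune for your deployment.)
taskmanager.network.max-num-tcp-connections: 1000
```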
- is caused by
FLINK-15455 Enable TCP connection reuse across multiple jobs.
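Our reading of the failure mode, sketched in hypothetical Python (ConnectionCache, Conn, and keying by remote address are illustrative assumptions, not Flink's actual classes): if connections are reused across jobs and cached by remote (IP, port), a peer that restarts with the same IP can leave a dead channel in the cache, and the next partition request fails on the closed channel.

```python
class Conn:
    """Hypothetical TCP channel to a remote TaskManager."""
    def __init__(self, addr):
        self.addr = addr
        self.closed = False

    def send(self, msg):
        if self.closed:
            # Analogous to Netty's ClosedChannelException
            raise ConnectionError("channel to %s:%d is closed" % self.addr)
        return "sent %s to %s:%d" % (msg, *self.addr)


class ConnectionCache:
    """Hypothetical cache that reuses connections keyed by (ip, port)."""
    def __init__(self):
        self._conns = {}

    def get(self, addr):
        # Reuses an existing connection for this address if present,
        # even if the remote peer has since restarted behind the same IP.
        if addr not in self._conns:
            self._conns[addr] = Conn(addr)
        return self._conns[addr]


cache = ConnectionCache()
addr = ("XXX.XXX.XX.32", 6121)

c1 = cache.get(addr)          # connection established while TM2 is healthy
c1.closed = True              # TM2 pod restarts; same IP, old channel is dead

c2 = cache.get(addr)          # cache hit: the stale closed channel is reused
assert c2 is c1               # same object, so send() will fail
```

With a new pod IP the cache key changes and a fresh connection is opened, which matches the observation that explicitly deleting the pod (new IP) lets the job start again.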