FLINK-28695

Fail to send partition request to restarted taskmanager



    Description

      After upgrading to 1.15.1 we started getting the following error while running the JOB:

       

      org.apache.flink.runtime.io.network.netty.exception.LocalTransportException: Sending the partition request to '/XXX.XXX.XX.32:6121 (#0)' failed.
          at org.apache.flink.runtime.io.network.netty.NettyPartitionRequestClient$1.operationComplete(NettyPartitionRequestClient.java:145)
          ...
      Caused by: org.apache.flink.shaded.netty4.io.netty.channel.StacklessClosedChannelException
          at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.write(Object, ChannelPromise)(Unknown Source)

      After investigation we managed to narrow it down to the exact sequence of events that triggers this issue:

      1. Deploying the JOB on a fresh Kubernetes session cluster with multiple TaskManagers (TM1 and TM2) is successful. The job has multiple partitions running on both TM1 and TM2.
      2. One TaskManager, TM2 (XXX.XXX.XX.32), fails for an unrelated reason, for example an OOM exception.
      3. The Kubernetes POD with TaskManager TM2 is restarted. The POD retains the same IP address as before.
      4. The JobManager is able to pick up the restarted TM2 (XXX.XXX.XX.32).
      5. The JOB is restarted because it was running on the failed TaskManager TM2.
      6. The data channel from TM1 to TM2 is closed and we get LocalTransportException: Sending the partition request to '/XXX.XXX.XX.32:6121 (#0)' failed while the JOB is running (see the sketch after this list).
      7. When we explicitly delete the POD with TM2, a new POD is created with a different IP address and the JOB is able to start again.
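      The failure in step 6 looks consistent with the network client reusing a cached connection to an address that has not changed even though the process behind it has. The standalone Java sketch below (hypothetical class names, not Flink's actual NettyPartitionRequestClient internals) only illustrates that idea: a channel cache keyed by the remote address hands back the stale, closed channel when the restarted TaskManager keeps the same IP and port.

      import java.net.InetSocketAddress;
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      /**
       * Standalone sketch (hypothetical names, not Flink's actual classes): if the
       * network client caches TCP channels keyed only by the remote address, a peer
       * that restarts with the same IP and port maps to the same cache entry, so the
       * stale, closed channel is reused and the partition request fails instead of a
       * new connection being opened.
       */
      public class ConnectionCacheSketch {

          /** Stand-in for a pooled network channel. */
          static class PooledChannel {
              private volatile boolean open = true;

              void close() {
                  open = false;
              }

              void sendPartitionRequest(String partitionId) {
                  if (!open) {
                      // Mirrors the reported LocalTransportException / ClosedChannelException.
                      throw new IllegalStateException(
                              "Sending the partition request failed: channel to restarted peer is closed");
                  }
                  System.out.println("Requested " + partitionId);
              }
          }

          // Cache keyed only by the remote address; nothing here distinguishes the
          // old TaskManager process from the restarted one behind the same IP:port.
          private final Map<InetSocketAddress, PooledChannel> channels = new ConcurrentHashMap<>();

          PooledChannel getOrCreate(InetSocketAddress remote) {
              return channels.computeIfAbsent(remote, addr -> new PooledChannel());
          }

          public static void main(String[] args) {
              ConnectionCacheSketch cache = new ConnectionCacheSketch();
              InetSocketAddress tm2 = new InetSocketAddress("10.0.0.32", 6121);

              // Initial deployment: TM1 opens a channel to TM2 and requests data.
              cache.getOrCreate(tm2).sendPartitionRequest("partition-0");

              // TM2's POD is restarted with the same IP; the old TCP connection dies.
              cache.getOrCreate(tm2).close();

              // After the job restart the cached, now closed channel is looked up
              // again because the key (the remote address) has not changed.
              try {
                  cache.getOrCreate(tm2).sendPartitionRequest("partition-0");
              } catch (IllegalStateException e) {
                  System.out.println("Reproduced: " + e.getMessage());
              }
          }
      }

      Under this reading, deleting the POD (step 7) changes the remote address and thus the cache key, which would explain why the job only recovers then; raising taskmanager.network.max-num-tcp-connections (see the EDIT below) presumably causes additional connections to be opened and avoids the stale entry. This is our interpretation of the symptom, not a confirmed analysis of Flink's internals.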

      It is important to note that we did not encounter this issue with the previous 1.14.4 version; TaskManager restarts did not cause such errors there.

      Please see the attached Kubernetes deployments and reduced JobManager logs. The TaskManager logs show errors before the failure, but nothing significant after the restart.

      EDIT:

      Setting taskmanager.network.max-num-tcp-connections to a very high number works around the problem.
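      For reference, the workaround amounts to one line in flink-conf.yaml. taskmanager.network.max-num-tcp-connections is the option named above; the value here is only an illustration of "a very high number", not a tested recommendation:

          taskmanager.network.max-num-tcp-connections: 100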

      Attachments

        1. deployment.txt
          7 kB
          Simonas
        2. image.png
          157 kB
          Vitor
        3. image-1.png
          95 kB
          Vitor
        4. image-2022-11-20-16-16-45-705.png
          95 kB
          Rui Fan
        5. image-2022-11-21-17-15-58-749.png
          157 kB
          Rui Fan
        6. job_log.txt
          3 kB
          Simonas
        7. jobmanager_config.txt
          2 kB
          Simonas
        8. jobmanager_logs.txt
          0.5 kB
          Simonas
        9. pod_restart.txt
          0.6 kB
          Simonas
        10. taskmanager_config.txt
          1 kB
          Simonas



            People

              Assignee: Rui Fan (fanrui)
              Reporter: Simonas (simonas.gelazevicius@vinted.com)
