Spark / SPARK-40230

Executor connection issue in hybrid cloud deployment


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.2.1
    • Fix Version/s: None
    • Component/s: Block Manager, Kubernetes
    • Labels: None

    Description

      I understand that the issue is quite subtle and might be hard to debug, but I was not able to find any problem with our infrastructure, so I suspect something inside Spark itself.

      We deploy a Spark application on Kubernetes, and everything works well if all the driver & executor pods run either in AWS or in our DC. But when they are split between the two datacenters, something strange happens. For example, here are the logs of one of the executors inside the DC:

      22/08/26 07:55:35 INFO TransportClientFactory: Successfully created connection to /172.19.149.92:39414 after 50 ms (1 ms spent in bootstraps)
      22/08/26 07:55:35 TRACE TransportClient: Sending RPC to /172.19.149.92:39414
      22/08/26 07:55:35 TRACE TransportClient: Sending request RPC 4860401977118244334 to /172.19.149.92:39414 took 3 ms
      22/08/26 07:55:35 DEBUG TransportClient: Sending fetch chunk request 0 to /172.19.149.92:39414
      22/08/26 07:55:35 TRACE TransportClient: Sending request StreamChunkId[streamId=1644979023003,chunkIndex=0] to /172.19.149.92:39414 took 0 ms
      22/08/26 07:57:35 ERROR TransportChannelHandler: Connection to /172.19.149.92:39414 has been quiet for 120000 ms while there are outstanding requests. Assuming connection is dead; please adjust spark.shuffle.io.connectionTimeout if this is wrong. 
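
      For reference, the timeout named in the error can be raised when building the session. A minimal sketch (the 600s value and the app name are arbitrary illustrations, not recommendations), also setting spark.network.timeout, which is the umbrella default for the io timeouts:

      import org.apache.spark.sql.SparkSession

      // Sketch only: values are illustrative, the app name is hypothetical.
      val spark = SparkSession.builder()
        .appName("hybrid-cloud-timeout-check")
        // timeout named in the TransportChannelHandler error above
        .config("spark.shuffle.io.connectionTimeout", "600s")
        // umbrella default used for the io timeouts when they are not set explicitly
        .config("spark.network.timeout", "600s")
        .getOrCreate()

      Raising it only changes how long Spark waits before declaring the connection dead; it does not explain why the response never arrives.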

      The executor successfully creates the connection & sends the request, but the connection is then assumed dead. Even stranger, the executor at IP 172.19.149.92 did send the response back, which I can confirm with the following logs:

      22/08/26 07:55:35 TRACE MessageDecoder: Received message ChunkFetchRequest: ChunkFetchRequest[streamChunkId=StreamChunkId[streamId=1644979023003,chunkIndex=0]]
      22/08/26 07:55:35 TRACE ChunkFetchRequestHandler: Received req from /172.19.123.197:37626 to fetch block StreamChunkId[streamId=1644979023003,chunkIndex=0]
      22/08/26 07:55:35 TRACE OneForOneStreamManager: Removing stream id 1644979023003
      22/08/26 07:55:35 TRACE BlockInfoManager: Task -1024 releasing lock for broadcast_0_piece0
      --
      22/08/26 07:55:35 TRACE BlockInfoManager: Task -1024 releasing lock for broadcast_0_piece0
      22/08/26 07:55:35 TRACE ChunkFetchRequestHandler: Sent result ChunkFetchSuccess[streamChunkId=StreamChunkId[streamId=1644979023003,chunkIndex=0],buffer=org.apache.spark.storage.BlockManagerManagedBuffer@79b43e2a] to client /172.19.123.197:37626 
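
      So the sending side reports ChunkFetchSuccess, yet the receiving side sees nothing for 120000 ms, which looks like the reply is lost somewhere on the return path between the datacenters. Below is a rough diagnostic sketch (host, port and payload sizes are placeholders, not values from our deployment) that could be run inside the two pods to check whether only larger payloads get dropped on the cross-DC path, e.g. due to MTU / fragmentation issues on the tunnel:

      import java.net.{ServerSocket, Socket}

      // Diagnostic sketch only; PeerHost and Port are placeholders.
      object CrossDcEchoProbe {
        val PeerHost = "172.19.149.92"
        val Port     = 40000

        // Run on one pod: accept a single connection and echo everything back.
        def echoServer(): Unit = {
          val server = new ServerSocket(Port)
          val sock   = server.accept()
          val in     = sock.getInputStream
          val out    = sock.getOutputStream
          val buf    = new Array[Byte](64 * 1024)
          var n      = in.read(buf)
          while (n != -1) { out.write(buf, 0, n); out.flush(); n = in.read(buf) }
          sock.close(); server.close()
        }

        // Run on the other pod: send payloads of increasing size and read the echo back.
        def probe(): Unit = {
          val sock = new Socket(PeerHost, Port)
          val out  = sock.getOutputStream
          val in   = sock.getInputStream
          val buf  = new Array[Byte](64 * 1024)
          for (size <- Seq(1024, 64 * 1024, 1024 * 1024, 8 * 1024 * 1024)) {
            // Write from a separate thread so reading the echo can proceed concurrently.
            val writer = new Thread(() => { out.write(Array.fill[Byte](size)(1.toByte)); out.flush() })
            writer.start()
            var received = 0
            while (received < size) {
              val n = in.read(buf)
              if (n == -1) sys.error(s"connection closed after $received of $size bytes")
              received += n
            }
            writer.join()
            println(s"echoed $size bytes OK")
          }
          sock.close()
        }

        def main(args: Array[String]): Unit =
          if (args.headOption.contains("server")) echoServer() else probe()
      }

      If the small payloads come back but the multi-megabyte ones hang, that would point at the network path between the datacenters rather than at Spark itself.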

      A few suspicious details here:

      • the connection to an executor pod is logged as /<IP>, while the connection to the driver is logged as <POD_NAME>.<NAMESPACE>.svc/<IP> (see the sketch after this list)
      • Task -1024 releasing lock for broadcast_0_piece0 (the negative task id looks odd)
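
      Regarding the first bullet: as far as I can tell, the /<IP> vs <HOST>/<IP> difference is just how java.net.InetSocketAddress prints itself, depending on whether the peer was addressed by a bare IP (executor to executor) or by a DNS name (the driver's headless service). A tiny sketch with made-up addresses:

      import java.net.{InetAddress, InetSocketAddress}

      // Built from a bare IP literal: no hostname is recorded, so toString() is "/<ip>:<port>".
      val byIp = new InetSocketAddress(InetAddress.getByName("172.19.149.92"), 39414)
      println(byIp)    // /172.19.149.92:39414

      // Built from a DNS name (the driver service name is made up here): if the name resolves,
      // toString() is "<host>/<resolved ip>:<port>", matching how the driver connection is logged.
      val byName = new InetSocketAddress("my-app-driver-svc.spark.svc", 7078)
      println(byName)  // my-app-driver-svc.spark.svc/<resolved ip>:7078

      So the bare /<IP> form by itself is probably harmless; it just means executors address each other by pod IP rather than by a service name.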

          People

            Assignee: Unassigned
            Reporter: Gleb Abroskin (gabroskin)
            Votes: 0
            Watchers: 1
