Spark / SPARK-14437

Spark using Netty RPC gets wrong address in some setups

Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.6.0, 1.6.1
    • Fix Version/s: 2.0.0
    • Component/s: Block Manager, Spark Core
    • Labels: None
    • Environment: AWS, Docker, Flannel

    Description

      Netty can't determine the correct origin address of a connection in certain network setups. Spark should handle this itself, since relying on Netty to report every peer's address correctly leads to inconsistent and unpredictable network state. We're currently using Docker with Flannel on AWS, where container communication looks something like: Container 1 (1.2.3.1) -> Docker host A (1.2.3.0) -> Docker host B (4.5.6.0) -> Container 2 (4.5.6.1)

      If the client in that setup is Container 1 (1.2.3.1), Netty channels from there to Container 2 will report a client address of 1.2.3.0 (the address of Docker host A, not of the container).

      The RequestMessage object that is sent over the wire already contains a senderAddress field that the sender can use to report its own address. In NettyRpcEnv#internalReceive, a null senderAddress is replaced with the remote socket address of the Netty channel. senderAddress is currently always null in the messages sent from the executors, so every message ends up with one of these incorrect addresses (we've switched back to Akka as a temporary workaround). The executor should send its address explicitly so that the driver doesn't have to infer addresses from possibly incorrect information reported by Netty.
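
      To make the flow concrete, here is a minimal Scala sketch of the logic described above. It is not the actual NettyRpcEnv source: RpcAddress and RequestMessage are simplified stand-ins, and resolveSender/buildRequest are hypothetical helpers used only for illustration.

      import java.net.InetSocketAddress

      object SenderAddressSketch {
        // Simplified stand-ins for the real Spark classes; illustration only.
        case class RpcAddress(host: String, port: Int)
        case class RequestMessage(senderAddress: RpcAddress, receiverName: String, content: Any)

        // Receiving side (roughly what internalReceive does today): if the sender
        // left senderAddress null, fall back to the remote address of the Netty
        // channel. Behind Docker/Flannel NAT that address is the Docker host
        // (e.g. 1.2.3.0), not the container that actually sent the message.
        def resolveSender(msg: RequestMessage, channelRemote: InetSocketAddress): RpcAddress =
          if (msg.senderAddress != null) msg.senderAddress
          else RpcAddress(channelRemote.getHostString, channelRemote.getPort)

        // Proposed fix on the sending side: always stamp the executor's own
        // advertised RpcEnv address into the message instead of leaving it null,
        // so the receiver never has to infer it from the socket.
        def buildRequest(ownAddress: RpcAddress, receiverName: String, content: Any): RequestMessage =
          RequestMessage(senderAddress = ownAddress, receiverName = receiverName, content = content)
      }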


People

    Assignee: Shixiong Zhu (zsxwing)
    Reporter: Kevin Hogeland (hogeland)
    Votes: 1
    Watchers: 11
