Hadoop HDFS / HDFS-4646

createNNProxyWithClientProtocol ignores configured timeout value



    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.0.3-alpha, 2.0.4-alpha, 3.0.0-alpha1
    • Fix Version/s: 2.0.4-alpha
    • Component/s: namenode
    • Labels: None
    • Environment: Linux


      The Client RPC I/O timeout mechanism appears to be controlled by two core-site.xml parameters:

      1. A boolean ipc.client.ping
      2. A numeric value ipc.ping.interval

      If ipc.client.ping is true, the client sends an RPC ping every ipc.ping.interval milliseconds.
      If ipc.client.ping is false, ipc.ping.interval is used as the socket timeout value.
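For illustration, a core-site.xml fragment that disables pings so the interval acts as a socket timeout (the 20-second value is illustrative, not a recommendation):

```xml
<!-- With ping disabled, ipc.ping.interval becomes the client socket timeout. -->
<property>
  <name>ipc.client.ping</name>
  <value>false</value>
</property>
<property>
  <name>ipc.ping.interval</name>
  <value>20000</value>
</property>
```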

      The bug here is that when creating a non-HA proxy, the configured timeout value is ignored and 0 is passed in. 0 is taken to mean 'wait forever', so the client RPC socket never times out.

      Note that this bug is reproducible only when the NN machine dies, i.e. the TCP stack at the NN IP address stops responding completely. The code does not take this path when you 'kill -9' the NN process: in that case the machine's TCP stack is still alive and sends a TCP RST to the client, which results in a socket error (not a timeout).

      The fix is to pass in the correct configured value for timeout by calling Client.getTimeout(conf) instead of passing in 0.
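The timeout-selection logic described above can be sketched as follows. This is a minimal standalone model of the documented behavior, not Hadoop's actual Client class; the map-based conf and default values are assumptions for illustration (Hadoop's real defaults are ping enabled with a 60-second interval):

```java
import java.util.HashMap;
import java.util.Map;

public class IpcTimeoutSketch {
    // Models the semantics attributed to Client.getTimeout(conf) in the
    // description: when ipc.client.ping is false, ipc.ping.interval acts as
    // the socket timeout; when ping is enabled, there is no hard socket
    // timeout (represented here as -1) because pings keep the connection live.
    static int getTimeout(Map<String, String> conf) {
        boolean ping = Boolean.parseBoolean(
                conf.getOrDefault("ipc.client.ping", "true"));
        int pingInterval = Integer.parseInt(
                conf.getOrDefault("ipc.ping.interval", "60000"));
        return ping ? -1 : pingInterval;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();

        // Ping disabled: the interval becomes the socket timeout.
        conf.put("ipc.client.ping", "false");
        conf.put("ipc.ping.interval", "20000");
        System.out.println(getTimeout(conf));

        // Ping enabled: no socket timeout; pings detect liveness instead.
        conf.put("ipc.client.ping", "true");
        System.out.println(getTimeout(conf));
    }
}
```

Passing a value derived this way into the proxy's socket setup (instead of the hard-coded 0) is what lets the client give up when the remote TCP stack goes silent.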


        1. HDFS-4646.patch
          0.9 kB
          Jagane Sundar
        2. HDFS-4646.001.patch
          0.9 kB
          Jagane Sundar



            Assignee: Jagane Sundar
            Reporter: Jagane Sundar