Hadoop Map/Reduce / MAPREDUCE-5616

MR Client-AppMaster RPC max retries on socket timeout is too high.


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.2.0, 3.0.0-alpha1
    • Fix Version/s: 2.3.0
    • Component/s: client
    • Labels: None
    • Target Version/s:
    • Hadoop Flags: Reviewed

    Description

    MAPREDUCE-3811 introduced a separate config key for overriding the max retries applied to RPC connections from the MapReduce client to the MapReduce Application Master. This was done to make failover from the AM to the MapReduce History Server faster when the AM completes while the client still thinks it is running. However, the RPC client uses a separate setting for retries on socket timeouts, and that one is not overridden. Its default is 45 retries with a 20-second timeout on each attempt, so in environments where connection attempts time out rather than get refused, the client waits up to 15 minutes (45 × 20 seconds) before failing over.
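
    A minimal sketch of the two settings in play, assuming the stock Hadoop keys ipc.client.connect.max.retries.on.timeouts (the generic IPC retry count for socket timeouts, default 45) and yarn.app.mapreduce.client-am.ipc.max-retries (the connection-refused override from MAPREDUCE-3811, default 3); the class name and the printed arithmetic are illustrative only:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ClientToAmRetryMath {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Generic IPC client setting: how many times a connect attempt that hits a
    // socket timeout is retried. Each attempt waits up to ~20 seconds by default.
    int timeoutRetries =
        conf.getInt("ipc.client.connect.max.retries.on.timeouts", 45);

    // MAPREDUCE-3811 override: caps ordinary connect retries (connection refused)
    // from the MR client to the AM, but does not apply to the timeout case above.
    int clientToAmRetries =
        conf.getInt("yarn.app.mapreduce.client-am.ipc.max-retries", 3);

    // Worst case before the client gives up on the AM and fails over to the
    // history server when connects time out: 45 retries * 20 s = 900 s = 15 min.
    System.out.println("retries on socket timeout = " + timeoutRetries
        + " (worst-case wait ~" + (timeoutRetries * 20) + " s)");
    System.out.println("retries on connection refused (client -> AM) = "
        + clientToAmRetries);
  }
}
{code}

    The fix (see the attached patch) mirrors the MAPREDUCE-3811 approach by giving the client-to-AM connection its own, much smaller retries-on-timeouts limit, so the socket-timeout path fails over as quickly as the connection-refused path.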

    Attachments

    • MAPREDUCE-5616.1.patch (4 kB), Chris Nauroth

    People

    • Assignee: Chris Nauroth (cnauroth)
    • Reporter: Chris Nauroth (cnauroth)
    • Votes: 0
    • Watchers: 4
