Hadoop Map/Reduce · MAPREDUCE-5616

MR Client-AppMaster RPC max retries on socket timeout is too high.


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.2.0, 3.0.0-alpha1
    • Fix Version/s: 2.3.0
    • Component/s: client
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

  MAPREDUCE-3811 introduced a separate config key for overriding the max retries applied to RPC connections from the MapReduce Client to the MapReduce Application Master. This was done to make failover from the AM to the MapReduce History Server faster in the event that the AM completes while the client still thinks it's running. However, the RPC client uses a separate setting for socket timeouts, and that one is not overridden. Its default is 45 retries with a 20-second timeout per attempt, so in environments where connections time out rather than being refused, the client waits 15 minutes (45 × 20 s = 900 s) before failing over.
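  As an illustrative sketch, the two retry knobs can be set independently in the client configuration. The property names below are assumptions based on Hadoop's IPC client and MapReduce client configuration keys; verify them against your Hadoop version's defaults before relying on them:

  ```xml
  <!-- Sketch only: property names assumed from Hadoop's IPC/MR client
       configuration and may differ by version. -->

  <!-- Introduced by MAPREDUCE-3811: caps retries when the connection
       to the AM is refused (e.g. AM already exited). -->
  <property>
    <name>yarn.app.mapreduce.client-am.ipc.max-retries</name>
    <value>3</value>
  </property>

  <!-- The separate IPC setting governing retries on socket timeouts.
       Its default of 45 retries, each with a 20-second timeout, is
       what produces the 15-minute failover wait described above. -->
  <property>
    <name>ipc.client.connect.max.retries.on.timeouts</name>
    <value>3</value>
  </property>
  ```

  Lowering the timeout-based retry count to match the refused-connection retry count would make failover latency consistent regardless of how the dead AM's address fails.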

      Attachments

        1. MAPREDUCE-5616.1.patch
          4 kB
          Chris Nauroth

        Issue Links

        Activity


          People

            Assignee: cnauroth Chris Nauroth
            Reporter: cnauroth Chris Nauroth
            Votes: 0
            Watchers: 4

            Dates

              Created:
              Updated:
              Resolved:
