Hadoop Common / HADOOP-2870

Datanode.shutdown() and Namenode.stop() should close all rpc connections


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.16.0
    • Fix Version/s: 0.17.0
    • Component/s: ipc
    • Labels: None

    Description

Currently these two cleanup methods do not close all existing RPC connections. If a mini DFS cluster is shut down and then restarted, as we do in TestFileCreation, RPCs in the second mini cluster reuse the unclosed connections opened in the first run, but there is no server running to serve the requests. So the client gets stuck waiting for a response forever if the client-side timeout is removed, as suggested by HADOOP-2811.
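      A minimal sketch of the failure mode and the fix, using a hypothetical connection cache (not Hadoop's actual ipc.Client internals): clients cache connections keyed by server address, so a shutdown path that does not explicitly close and drop every cached entry leaves stale connections that a restarted cluster will wrongly reuse.

      ```java
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      // Hypothetical illustration of a client-side RPC connection cache.
      // Names (ConnectionCache, closeAll) are assumptions for this sketch.
      class ConnectionCache {
          static class Connection {
              final String server;
              boolean open = true;
              Connection(String server) { this.server = server; }
              void close() { open = false; }
          }

          private final Map<String, Connection> cache = new ConcurrentHashMap<>();

          // Reuses a cached connection to the same server address if one exists;
          // this reuse is what hands a stale connection to the second cluster.
          Connection getConnection(String server) {
              return cache.computeIfAbsent(server, Connection::new);
          }

          // The fix this issue asks for: shutdown paths (DataNode.shutdown(),
          // NameNode.stop()) must close and forget every cached connection so
          // nothing stale survives into a restarted mini cluster.
          void closeAll() {
              for (Connection c : cache.values()) {
                  c.close();
              }
              cache.clear();
          }

          int size() { return cache.size(); }
      }
      ```

      Without `closeAll()`, the second cluster's `getConnection` call returns the first run's connection, and with no server behind it (and no client-side timeout per HADOOP-2811) the caller blocks forever.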

      Attachments

        1. closeConnection4.patch
          25 kB
          Hairong Kuang
        2. closeConnection3.patch
          25 kB
          Hairong Kuang
        3. closeConnection2.patch
          25 kB
          Hairong Kuang
        4. closeConnection1.patch
          25 kB
          Hairong Kuang
        5. closeConnection.patch
          22 kB
          Hairong Kuang

        Issue Links

        Activity


          People

            Assignee: Hairong Kuang
            Reporter: Hairong Kuang
            Votes: 0
            Watchers: 0

            Dates

              Created:
              Updated:
              Resolved:
