Hadoop Common / HADOOP-9676

make maximum RPC buffer size configurable


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.1.0-beta
    • Fix Version/s: 2.1.0-beta
    • Component/s: None
    • Labels: None

Description

    Currently the RPC server allocates however much buffer memory the client asks for, without validating the requested size. It would be nice to make the maximum RPC buffer size configurable. This would prevent a rogue client from bringing down the NameNode (or another Hadoop daemon) with just a few requests for 2 GB buffers. It would also make it easier to debug issues with very large RPCs or malformed headers, since OOMs can be difficult for developers to reproduce.

Attachments

Issue Links

Activity


People

    Assignee: cmccabe (Colin McCabe)
    Reporter: cmccabe (Colin McCabe)
    Votes: 0
    Watchers: 7

Dates

    Created:
    Updated:
    Resolved:
