Hadoop Common / HADOOP-1448

Setting the replication factor of a file too high causes namenode cpu overload


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.13.0
    • Fix Version/s: 0.14.0
    • Component/s: None
    • Labels: None

    Description

      The replication factor of a file is set to 300 (on an 800-node cluster). All mappers then try to open this file. For every open call that the namenode receives from each of these 800 clients, it sorts all the replicas of the block(s) by their distance from the client. This causes CPU overload on the namenode.

      One proposal is to make the namenode return an unsorted list of datanodes to the client. The information about each replica also includes the rack on which that replica resides. The client can look at the replicas to determine whether there is a copy on the local node. If not, it can check whether there is a replica on the local rack; failing that, it can choose a replica at random (a sketch of this selection follows below).

      This proposal is scalable because the sorting and selection of replicas is done by the client rather than the namenode.
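
      For illustration, here is a minimal sketch of the proposed client-side selection, in Java. The ReplicaSelector class, the ReplicaLocation holder, and the choose() method are hypothetical names for this sketch, not the actual DFSClient types:

          import java.util.List;
          import java.util.Random;

          class ReplicaSelector {
              private final Random random = new Random();

              // Hypothetical holder for the host and rack of one replica,
              // as returned (unsorted) by the namenode.
              static class ReplicaLocation {
                  final String host;
                  final String rack;
                  ReplicaLocation(String host, String rack) {
                      this.host = host;
                      this.rack = rack;
                  }
              }

              // Prefer a replica on the local node, then one on the local
              // rack, otherwise pick one at random.
              ReplicaLocation choose(List<ReplicaLocation> replicas,
                                     String localHost, String localRack) {
                  for (ReplicaLocation r : replicas) {
                      if (r.host.equals(localHost)) {
                          return r;      // copy on the local node
                      }
                  }
                  for (ReplicaLocation r : replicas) {
                      if (r.rack.equals(localRack)) {
                          return r;      // copy on the local rack
                      }
                  }
                  return replicas.get(random.nextInt(replicas.size()));
              }
          }

      Each client does at most two linear scans over the replica locations, so the namenode no longer performs a per-open sort for every one of the 800 clients.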

      Attachments

        Activity


          People

            Assignee: Hairong Kuang
            Reporter: Dhruba Borthakur
            Votes: 0
            Watchers: 0

            Dates

              Created:
              Updated:
              Resolved:
