Bigtop / BIGTOP-2576

For small clusters it is useful to turn replace-datanode-on-failure off


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.1.0
    • Fix Version/s: 1.2.0
    • Component/s: deployment
    • Labels: None

    Description

      As per documentation in hdfs-default.xml

      If there is a datanode/network failure in the write pipeline, DFSClient will try to 
      remove the failed datanode from the pipeline and then continue writing with the 
      remaining datanodes. As a result, the number of datanodes in the pipeline is decreased. 
      The feature is to add new datanodes to the pipeline. This is a site-wide property to 
      enable/disable the feature. When the cluster size is extremely small, e.g. 3 nodes or less, 
      cluster administrators may want to set the policy to NEVER in the default configuration 
      file or disable this feature. Otherwise, users may experience an unusually high rate of 
      pipeline failures since it is impossible to find new datanodes for replacement. See also 
      dfs.client.block.write.replace-datanode-on-failure.policy
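
      As a sketch of what the proposed change amounts to, the policy described
      above can be disabled for a small cluster with an hdfs-site.xml fragment
      like the following (property names as documented in hdfs-default.xml; the
      exact values chosen here are illustrative, not the patch itself):

      ```xml
      <!-- hdfs-site.xml: disable datanode replacement on write-pipeline
           failure, appropriate for clusters of ~3 nodes or fewer where no
           replacement datanode is available anyway -->
      <property>
        <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
        <value>false</value>
      </property>
      <property>
        <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
        <!-- NEVER: never add a new datanode to the pipeline on failure -->
        <value>NEVER</value>
      </property>
      ```

      The effective value can be checked on a configured node with
      `hdfs getconf -confKey dfs.client.block.write.replace-datanode-on-failure.policy`.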
      


          People

            Assignee: Roman Shaposhnik
            Reporter: Roman Shaposhnik
            Votes: 0
            Watchers: 2

