Bigtop / BIGTOP-2576

For small clusters it is useful to turn replace-datanode-on-failure off

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.1.0
    • Fix Version/s: 1.2.0
    • Component/s: deployment
    • Labels:
      None

      Description

      As per the documentation in hdfs-default.xml:

      If there is a datanode/network failure in the write pipeline, DFSClient will try to 
      remove the failed datanode from the pipeline and then continue writing with the 
      remaining datanodes. As a result, the number of datanodes in the pipeline is decreased. 
      The feature is to add new datanodes to the pipeline. This is a site-wide property to 
      enable/disable the feature. When the cluster size is extremely small, e.g. 3 nodes or less, 
      cluster administrators may want to set the policy to NEVER in the default configuration 
      file or disable this feature. Otherwise, users may experience an unusually high rate of 
      pipeline failures since it is impossible to find new datanodes for replacement. See also 
      dfs.client.block.write.replace-datanode-on-failure.policy.
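
      A minimal sketch of the hdfs-site.xml override the description suggests for
      small clusters (property names taken from hdfs-default.xml; the exact change
      shipped in Bigtop's deployment templates may differ):

      ```xml
      <!-- Disable datanode replacement on write-pipeline failure entirely
           (site-wide), for clusters of 3 nodes or fewer. -->
      <property>
        <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
        <value>false</value>
      </property>

      <!-- Alternatively, leave the feature enabled but set the policy so a
           failed datanode is never replaced in the pipeline. -->
      <property>
        <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
        <value>NEVER</value>
      </property>
      ```

      Either setting prevents the client from stalling while it searches for a
      replacement datanode that a 3-node-or-smaller cluster cannot provide.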
      

        Activity

        There are no comments yet on this issue.

          People

          • Assignee:
            rvs Roman Shaposhnik
          • Reporter:
            rvs Roman Shaposhnik
          • Votes:
            0
          • Watchers:
            2

            Dates

            • Created:
            • Updated:
            • Resolved:
