Hadoop HDFS / HDFS-338

When a block is severely under replicated at creation time, a request for block replication should be scheduled immediately


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed

    Description

      While writing a block to datanodes, if the DFS client detects a bad datanode in the write pipeline, it reconstructs a new pipeline that
      excludes the bad node. As a result, by the time the client finishes writing the block, the number of replicas for that block may be lower
      than the intended replication factor. If the ratio of the actual number of replicas to the intended replication factor falls below a
      certain threshold (say, 0.68), the client should send a request to the namenode to replicate that block immediately.
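
      For illustration, here is a minimal sketch of the check described above. The class, method, and constant names are hypothetical, chosen
      for this sketch rather than taken from actual HDFS code; the 0.68 threshold is the example value from the description.

      {code:java}
      // Hypothetical sketch of the proposed client-side check; the names below
      // are illustrative, not actual HDFS APIs.
      public class UnderReplicationCheck {

          // Example threshold from the description: if fewer than 68% of the
          // intended replicas were written, ask the namenode to re-replicate now.
          private static final double UNDER_REPLICATION_THRESHOLD = 0.68;

          // Called when the client finishes a block. If bad datanodes were
          // dropped from the write pipeline, liveReplicas may be below the
          // intended replication factor.
          public static boolean shouldRequestImmediateReplication(
                  int liveReplicas, int intendedReplication) {
              if (intendedReplication <= 0) {
                  return false; // nothing to compare against
              }
              double ratio = (double) liveReplicas / intendedReplication;
              return ratio < UNDER_REPLICATION_THRESHOLD;
          }

          public static void main(String[] args) {
              // Pipeline started with 3 datanodes, 2 were dropped: 1/3 < 0.68 -> true
              System.out.println(shouldRequestImmediateReplication(1, 3));
              // 2 of 3 replicas written: 2/3 is about 0.67 < 0.68 -> true
              System.out.println(shouldRequestImmediateReplication(2, 3));
              // All replicas written: 3/3 = 1.0 -> false
              System.out.println(shouldRequestImmediateReplication(3, 3));
          }
      }
      {code}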


          People

            Assignee: Unassigned
            Reporter: Runping Qi (runping)
            Votes: 0
            Watchers: 1
