Hadoop HDFS / HDFS-5662

Can't decommission a DataNode due to file's replication factor larger than the rest of the cluster size


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.3.0
    • Component/s: namenode
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      A datanode can't be decommissioned if it holds a replica of a file whose replication factor is larger than the size of the remaining cluster.

      One way to fix this is to introduce some kind of minimum replication setting, so that any datanode can be decommissioned regardless of the largest replication factor among the files it holds replicas for; a sketch of the idea follows.
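      For illustration, a minimal sketch of that idea in Java (the names below are hypothetical, not the actual NameNode code changed by the patch): during the decommission check, the expected replica count is capped at the number of remaining live datanodes.

          class DecommissionCheckSketch {
              // Sketch only: illustrative names, not the real
              // BlockManager API touched by the attached patches.
              static boolean isSufficientlyReplicated(int fileReplication,
                                                      int liveReplicas,
                                                      int numLiveDatanodes) {
                  // Cap the expected replica count at the number of live
                  // datanodes: a file with replication 10 on a 3-node
                  // cluster needs only 3 live replicas to unblock
                  // decommission.
                  int expected = Math.min(fileReplication, numLiveDatanodes);
                  return liveReplicas >= expected;
              }
          }

      With this cap, isSufficientlyReplicated(10, 3, 3) returns true, so decommissioning is no longer blocked by a replication factor that the remaining cluster can never satisfy.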

      Attachments

        1. HDFS-5662.001.patch (6 kB, Brandon Li)
        2. HDFS-5662.002.patch (7 kB, Brandon Li)
        3. HDFS-5662.branch2.3.patch (7 kB, Brandon Li)


      People

        Assignee: Brandon Li
        Reporter: Brandon Li
        Votes: 0
        Watchers: 8
