
HDFS-5662: Can't decommission a DataNode when a file's replication factor is larger than the size of the rest of the cluster


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.3.0
    • Component/s: namenode
    • Labels: None
    • Hadoop Flags: Reviewed

Description

A datanode can't be decommissioned if it holds a replica of a file whose replication factor is larger than the size of the rest of the cluster: the remaining nodes can never reach the replication target, so the node stays in the decommissioning state indefinitely.

One way to fix this is to introduce some kind of minimum replication setting for the decommission check, so that any datanode can be decommissioned regardless of the largest replication factor among the files it stores.
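
To make the proposal concrete, the sketch below shows one way such a check could look: the required replica count is capped at the number of live datanodes before it is compared against the live replica count. This is a minimal illustration only; the class, method, and parameter names are invented for this example and do not reproduce the committed patch.

{code:java}
// Illustrative sketch only: names below are hypothetical, not actual
// BlockManager code from the HDFS-5662 patch.
public class DecommissionReplicationCheck {

  /**
   * Returns true if a block on a decommissioning datanode already has enough
   * live replicas for decommissioning to proceed. The required count is
   * capped at the number of live datanodes, so a file whose replication
   * factor exceeds the remaining cluster size cannot stall decommissioning.
   */
  static boolean hasEnoughLiveReplicas(int fileReplication,
                                       int liveReplicas,
                                       int numLiveDataNodes) {
    int required = Math.min(fileReplication, numLiveDataNodes);
    return liveReplicas >= required;
  }

  public static void main(String[] args) {
    // Example from the description: a file with replication factor 10 on a
    // cluster where only 3 datanodes remain once the decommissioning node is
    // excluded. Without the cap, the target of 10 replicas can never be met.
    System.out.println(hasEnoughLiveReplicas(10, 3, 3)); // true with the cap
    System.out.println(hasEnoughLiveReplicas(10, 2, 3)); // false: one more replica needed
  }
}
{code}

With a cap of this kind, decommissioning never waits for a replication target that the remaining cluster cannot physically satisfy, while still requiring a replica on every live datanode before the node is marked decommissioned.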

Attachments

    1. HDFS-5662.001.patch (6 kB, Brandon Li)
    2. HDFS-5662.002.patch (7 kB, Brandon Li)
    3. HDFS-5662.branch2.3.patch (7 kB, Brandon Li)

People

    • Assignee: Brandon Li (brandonli)
    • Reporter: Brandon Li (brandonli)
    • Votes: 0
    • Watchers: 8

Dates

    • Created:
    • Updated:
    • Resolved: