Hadoop HDFS / HDFS-1300

Decommissioning nodes does not increase replication priority


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 0.20.1, 0.20.2, 0.20.3, 0.20-append, 0.21.0, 0.22.0
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

    Description

      Currently, when a node is decommissioned, each of its blocks is inserted into neededReplications only if it is not already there. As a result, a block can sit in a low-priority queue even when all of its remaining replicas are on nodes being decommissioned.
      Our common use case for decommissioning is to proactively exclude nodes before they go bad, so it would be great to get the blocks at risk onto live datanodes as quickly as possible.
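      The issue can be illustrated with a simplified model of the NameNode's priority-bucketed needed-replications structure (the class and method names below are illustrative sketches, not Hadoop's actual UnderReplicatedBlocks API): an insert-only-if-absent policy leaves a previously queued block stuck at its old priority, while re-queuing at the recomputed priority promotes it when decommissioning leaves it with no other live replicas.

```java
import java.util.*;

/** Simplified sketch of a priority-bucketed needed-replications list.
 *  Names are hypothetical; the real HDFS class is UnderReplicatedBlocks. */
class NeededReplications {
    // Bucket 0 is the highest priority (blocks about to lose all live replicas).
    private final List<Set<String>> buckets = new ArrayList<>();

    NeededReplications(int levels) {
        for (int i = 0; i < levels; i++) buckets.add(new HashSet<>());
    }

    /** Buggy behavior: insert only if the block is not queued anywhere yet. */
    void addIfAbsent(String block, int priority) {
        for (Set<String> b : buckets) {
            if (b.contains(block)) return; // already queued: old priority kept
        }
        buckets.get(priority).add(block);
    }

    /** Fixed behavior: remove from any old bucket, re-queue at the new priority. */
    void update(String block, int newPriority) {
        for (Set<String> b : buckets) b.remove(block);
        buckets.get(newPriority).add(block);
    }

    /** Returns the bucket index holding the block, or -1 if not queued. */
    int priorityOf(String block) {
        for (int i = 0; i < buckets.size(); i++) {
            if (buckets.get(i).contains(block)) return i;
        }
        return -1;
    }
}

public class DecommissionPriorityDemo {
    public static void main(String[] args) {
        NeededReplications nr = new NeededReplications(3);

        // Block queued earlier at low priority (e.g. one replica missing).
        nr.addIfAbsent("blk_1", 2);

        // Decommissioning now puts every remaining replica at risk; the block
        // should jump to priority 0, but addIfAbsent silently ignores it.
        nr.addIfAbsent("blk_1", 0);
        System.out.println("buggy priority: " + nr.priorityOf("blk_1")); // still 2

        // Re-queuing at the recomputed priority promotes the block.
        nr.update("blk_1", 0);
        System.out.println("fixed priority: " + nr.priorityOf("blk_1")); // now 0
    }
}
```

      Under this sketch, the fix amounts to calling an update-style path during decommissioning rather than the add-if-absent path.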

      Attachments

        1. HDFS-1300.2.patch
          1 kB
          Dmytro Molkov
        2. HDFS-1300.3.patch
          7 kB
          Dmytro Molkov
        3. HDFS-1300.patch
          7 kB
          Dmytro Molkov

        Activity

          People

            Assignee: dms Dmytro Molkov
            Reporter: dms Dmytro Molkov
            Votes: 0
            Watchers: 4
