HDFS-5579: Under construction files make DataNode decommission take very long hours


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.2.0, 2.2.0
    • Fix Version/s: 2.3.0
    • Component/s: namenode
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      We noticed that decommissioning DataNodes sometimes takes a very long time, in some cases exceeding 100 hours.
      After checking the code, I found that BlockManager#computeReplicationWorkForBlocks(List<List<Block>> blocksToReplicate) will not schedule replication for blocks that belong to under construction files. However, BlockManager#isReplicationInProgress(DatanodeDescriptor srcNode) treats any block that still needs replication as pending, regardless of whether it belongs to an under construction file, so the decommission keeps waiting for blocks that will never be replicated.
      That is why decommissioning sometimes takes such a long time.
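      The mismatch can be illustrated with a simplified sketch. This is not the actual BlockManager code; the PendingBlock interface and its helper methods are hypothetical stand-ins for the checks the two code paths perform.

      // Illustrative sketch only, NOT the actual Hadoop source.
      // PendingBlock, belongsToUnderConstructionFile and needsMoreReplicas are
      // simplified stand-ins for the real BlockManager data structures.
      import java.util.List;

      class DecommissionSketch {

          // Replication scheduling skips blocks of under construction files ...
          static void computeReplicationWork(List<PendingBlock> blocksToReplicate) {
              for (PendingBlock b : blocksToReplicate) {
                  if (b.belongsToUnderConstructionFile()) {
                      continue;                 // never scheduled for replication
                  }
                  scheduleReplication(b);
              }
          }

          // ... but the progress check counts any under-replicated block,
          // so the DataNode can stay in DECOMMISSION_INPROGRESS indefinitely.
          static boolean isReplicationInProgress(List<PendingBlock> blocksOnNode) {
              for (PendingBlock b : blocksOnNode) {
                  if (b.needsMoreReplicas()) {  // no under-construction check here
                      return true;
                  }
              }
              return false;
          }

          static void scheduleReplication(PendingBlock b) { /* elided */ }

          interface PendingBlock {
              boolean belongsToUnderConstructionFile();
              boolean needsMoreReplicas();
          }
      }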

      Attachments

        1. HDFS-5579.patch (7 kB, yunjiong zhao)
        2. HDFS-5579-branch-1.2.patch (3 kB, yunjiong zhao)


              People

                Assignee: yunjiong zhao (zhaoyunjiong)
                Reporter: yunjiong zhao (zhaoyunjiong)
                Votes: 0
                Watchers: 14
