Hadoop HDFS / HDFS-1290

decommissioned nodes report not consistent / clear


    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Problem
    • Affects Version/s: 0.20.1
    • Fix Version/s: None
    • Component/s: namenode
    • Labels:
    • Environment:

      fedora 12


      After I add the list of decommissioned nodes to the exclude list and run -refreshNodes, the decommissioned/excluded nodes show up in both the live node list and the dead node list in the web UI.

      When I run -report from the command line, I get:
      Datanodes available: 14 (20 total, 6 dead)
      The problem here is that there are only 14 nodes in total, including the 6 added to the exclude list.

      Now, in the node-level status for each node, the excluded nodes say:
      Decommission Status : Normal
      But all of the nodes say the same thing. If it said something like "in progress" instead, it would be more informative.
      Note: the one thing distinguishing these excluded nodes is that they report 0 or 100% for all of the values in -report.

      At this point I know from https://issues.apache.org/jira/browse/HDFS-1125 that one may have to restart the cluster to completely remove the nodes, but I have no clue when I should restart.

      Ultimately, what is needed is some indication of when decommissioning is complete, so that all references to the excluded nodes (in the excludes and slaves files) can be removed and the cluster restarted.
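
      In the absence of an explicit completion signal, the one distinguishing trait noted above is that excluded nodes report 0 or 100% for their values in -report. As a workaround, one could scan the output of `hadoop dfsadmin -report` for nodes matching that pattern. This is only a sketch: the sample report fragment below is illustrative (its format is approximated from 0.20-era output), not taken from a real cluster, and the 0/100% heuristic is the reporter's observation, not a documented status.

      ```python
      import re

      def flag_suspect_nodes(report_text):
          """Scan `hadoop dfsadmin -report` output and return the names of
          nodes whose DFS Used% is exactly 0 or 100 -- the pattern the
          excluded nodes were observed to show."""
          suspects = []
          current_name = None
          for line in report_text.splitlines():
              line = line.strip()
              m = re.match(r"Name:\s*(\S+)", line)
              if m:
                  current_name = m.group(1)
                  continue
              m = re.match(r"DFS Used%:\s*([\d.]+)%", line)
              if m and current_name:
                  pct = float(m.group(1))
                  if pct in (0.0, 100.0):
                      suspects.append(current_name)
          return suspects

      # Illustrative -report fragment (format approximated, not real output):
      sample = """\
      Name: 10.0.0.1:50010
      DFS Used%: 42.5%

      Name: 10.0.0.2:50010
      DFS Used%: 0%
      """
      print(flag_suspect_nodes(sample))  # ['10.0.0.2:50010']
      ```

      In practice one would pipe the live report into this, e.g. `hadoop dfsadmin -report | python flag_suspects.py`, and only restart once the flagged set matches the exclude list.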


          Issue Links



              • Assignee:
                smartnut007 Arun Ramakrishnan
              • Votes:
                0
              • Watchers:
                6


                • Created: