Hadoop HDFS / HDFS-5046

Hang when add/remove a datanode into/from a 2 datanode cluster

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Problem
    • Affects Version/s: 1.1.1
    • Fix Version/s: None
    • Component/s: datanode
    • Labels: None
    • Environment: Red Hat Enterprise Linux Server release 5.3, 64-bit

      Description

      1. Install a Hadoop 1.1.1 cluster with 2 datanodes, dn1 and dn2, and set 'dfs.replication' to 2 in hdfs-site.xml.
      2. Add node dn3 to the cluster as a new datanode, leaving 'dfs.replication' in hdfs-site.xml unchanged at 2.
      Note: step 2 passed.
      3. Decommission dn3 from the cluster.
      Expected result: dn3 is decommissioned successfully.
      Actual result:
      a) The decommission hangs and the status stays at 'Waiting DataNode status: Decommissioned'. However, if I execute 'hadoop dfs -setrep -R 2 /', the decommission continues and eventually completes.
      b) If the initial cluster includes >= 3 datanodes, this issue is not encountered when adding/removing another datanode. For example, if I set up a cluster with 3 datanodes, I can successfully add a 4th datanode to it and then successfully remove that 4th datanode from the cluster.
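      For context, a decommission in Hadoop 1.x is driven by an exclude file referenced from hdfs-site.xml. A minimal sketch of the configuration matching the steps above (the exclude-file path is illustrative, not taken from this report):

      ```xml
      <!-- hdfs-site.xml: minimal sketch; the dfs.exclude path is illustrative -->
      <configuration>
        <property>
          <!-- replication factor kept at 2, as in steps 1-2 above -->
          <name>dfs.replication</name>
          <value>2</value>
        </property>
        <property>
          <!-- file listing datanodes to decommission; re-read on 'hadoop dfsadmin -refreshNodes' -->
          <name>dfs.hosts.exclude</name>
          <value>/etc/hadoop/conf/dfs.exclude</value>
        </property>
      </configuration>
      ```

      With this in place, decommissioning dn3 means adding it to the exclude file and running 'hadoop dfsadmin -refreshNodes'. One plausible reading of the hang, consistent with the 'Not A Problem' resolution and the reported workaround: any files written with the client-side default replication factor of 3 cannot have all replicas placed on the 2 datanodes that remain after dn3 leaves, so the decommission waits indefinitely; 'hadoop dfs -setrep -R 2 /' lowers those files to a factor the remaining nodes can satisfy.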

          People

          • Assignee: Unassigned
          • Reporter: sam liu
          • Votes: 0
          • Watchers: 2
