Hadoop Common / HADOOP-10131

NetworkTopology#countNumOfAvailableNodes() is returning wrong value if excluded nodes passed are not part of the cluster tree



    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.5-alpha, 3.0.0-alpha1
    • Fix Version/s: 2.6.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed


      I got "File /hdfs_COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation." in the following scenario:

      1. A 2-DN cluster.
      2. One of the datanodes had not responded for the last 10 minutes and was about to be detected as dead at the NN.
      3. A file write was attempted; for the block, the NN allocated both DNs.
      4. While creating the pipeline, the client took some time to detect the failure of one node.
      5. Before the client detected the pipeline failure, the dead node was removed from the cluster map on the NN side.
      6. The client then abandoned the previous block and asked for a new block with the dead node in the excluded list, and got the above exception even though one more live node was available.

      Digging into this further, I found that
      NetworkTopology#countNumOfAvailableNodes() does not return the correct count when the excludeNodes passed from the client are not part of the cluster map.

      Adding to this, there is one more case where the count is wrong:
      1. If there is no node present for the normalized scope in the cluster.
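The miscount in step 6 above can be illustrated with a minimal sketch. This is not Hadoop's actual NetworkTopology code; the class, node names, and the simplified "cluster map" set are hypothetical, but the arithmetic mirrors the reported bug: subtracting every excluded node, even one no longer in the cluster map, leaves 0 available nodes when 1 is actually live.

```java
import java.util.*;

public class CountAvailableNodesSketch {
    // Hypothetical simplified cluster map: the set of node names the NN
    // currently knows about. The dead node has already been removed.
    static final Set<String> cluster = new HashSet<>(Arrays.asList("dn1"));

    // Buggy counting (sketch of the reported behavior): every excluded
    // node is subtracted, even one not present in the cluster map.
    static int countBuggy(Collection<String> excluded) {
        return cluster.size() - excluded.size();
    }

    // Fixed counting (sketch): only excluded nodes actually present in
    // the cluster map reduce the available count.
    static int countFixed(Collection<String> excluded) {
        int excludedInCluster = 0;
        for (String node : excluded) {
            if (cluster.contains(node)) {
                excludedInCluster++;
            }
        }
        return cluster.size() - excludedInCluster;
    }

    public static void main(String[] args) {
        // "dn2" is the dead node: removed from the cluster map at the NN,
        // but still passed by the client in the excluded list.
        List<String> excluded = Arrays.asList("dn2");
        System.out.println("buggy=" + countBuggy(excluded)); // 1 - 1 = 0
        System.out.println("fixed=" + countFixed(excluded)); // 1 - 0 = 1
    }
}
```

With the buggy count of 0, block allocation fails with "could only be replicated to 0 nodes" even though dn1 is live; the fixed count correctly reports 1 available node.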


        Attachments

        1. HDFS-5112.patch
          4 kB
          Vinayakumar B
        2. HADOOP-10131-002.patch
          7 kB
          Vinayakumar B
        3. HADOOP-10131.patch
          6 kB
          Vinayakumar B

              Assignee: Vinayakumar B (vinayakumarb)
              Reporter: Vinayakumar B (vinayakumarb)