Hadoop HDFS / HDFS-14527

Stopping all DataNodes may result in NN termination

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.3.0, 3.1.4, 3.2.2
    • Component/s: namenode
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      If we stop all DataNodes in the cluster, BlockPlacementPolicyDefault#chooseTarget may hit an ArithmeticException when calling #getMaxNodesPerRack; the runtime exception propagates out to BlockManager's ReplicationMonitor thread and then terminates the NN.
      The root cause is that BlockPlacementPolicyDefault#chooseTarget does not hold the global lock, so if all DataNodes die between the clusterMap.getNumOfLeaves() check and the call to getMaxNodesPerRack, the latter hits an ArithmeticException.

        private DatanodeStorageInfo[] chooseTarget(int numOfReplicas,
                                          Node writer,
                                          List<DatanodeStorageInfo> chosenStorage,
                                          boolean returnChosenNodes,
                                          Set<Node> excludedNodes,
                                          long blocksize,
                                          final BlockStoragePolicy storagePolicy,
                                          EnumSet<AddBlockFlag> addBlockFlags,
                                          EnumMap<StorageType, Integer> sTypes) {
          if (numOfReplicas == 0 || clusterMap.getNumOfLeaves()==0) {
            return DatanodeStorageInfo.EMPTY_ARRAY;
          }
          ......
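          // No global lock is held at this point, so the cluster can lose its last
          // live DataNodes between the getNumOfLeaves() check above and the call
          // below, which then fails with a divide-by-zero inside getMaxNodesPerRack.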
          int[] result = getMaxNodesPerRack(chosenStorage.size(), numOfReplicas);
          ......
      }
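
      To make the failure mode concrete, here is a minimal, self-contained sketch (simplified and illustrative only, not copied from the Hadoop source tree) of the arithmetic inside getMaxNodesPerRack. The class name and parameters are made up for the demo; the point is that a rack count of zero turns into the "/ by zero" seen in the log below.

        // Simplified model of the division performed by getMaxNodesPerRack
        // (illustrative only; not the actual Hadoop implementation).
        public class MaxNodesPerRackDemo {
          static int[] getMaxNodesPerRack(int numOfChosen, int numOfReplicas,
                                          int clusterSize, int numOfRacks) {
            int totalNumOfReplicas = numOfChosen + numOfReplicas;
            if (totalNumOfReplicas > clusterSize) {
              numOfReplicas -= (totalNumOfReplicas - clusterSize);
              totalNumOfReplicas = clusterSize;
            }
            // With every DataNode stopped, the topology reports zero racks and this
            // division throws java.lang.ArithmeticException: / by zero.
            int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 2;
            return new int[] {numOfReplicas, maxNodesPerRack};
          }

          public static void main(String[] args) {
            // All DataNodes dead: 0 leaves, 0 racks -> ArithmeticException.
            getMaxNodesPerRack(0, 3, 0, 0);
          }
        }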
      

      The detailed log is shown below.

      2019-05-31 12:29:21,803 ERROR org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: ReplicationMonitor thread received Runtime exception. 
      java.lang.ArithmeticException: / by zero
              at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.getMaxNodesPerRack(BlockPlacementPolicyDefault.java:282)
              at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:228)
              at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:132)
              at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationWork.chooseTargets(BlockManager.java:4533)
              at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationWork.access$1800(BlockManager.java:4493)
              at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1954)
              at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1830)
              at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4453)
              at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4388)
              at java.lang.Thread.run(Thread.java:745)
      2019-05-31 12:29:21,805 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
      

      To be honest, this is not a serious bug and is not easy to reproduce: if we stop all DataNodes and keep only the NameNode alive, HDFS cannot serve data normally anyway and we can only browse the directory tree. It is a corner case.
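
      One possible way to harden this path (a sketch against the simplified demo above, not necessarily what the committed patch does) is to bail out before the division whenever the rack count has dropped to zero, mirroring the existing getNumOfLeaves() == 0 check in chooseTarget:

        // Hypothetical guard, written to drop into the demo class above; the actual
        // fix committed for this issue may differ.
        static int[] getMaxNodesPerRackGuarded(int numOfChosen, int numOfReplicas,
                                               int clusterSize, int numOfRacks) {
          int totalNumOfReplicas = numOfChosen + numOfReplicas;
          if (totalNumOfReplicas > clusterSize) {
            numOfReplicas -= (totalNumOfReplicas - clusterSize);
            totalNumOfReplicas = clusterSize;
          }
          if (numOfRacks <= 0 || totalNumOfReplicas <= 1) {
            // No racks left to spread replicas across (or at most one replica to
            // place): skip the division instead of crashing ReplicationMonitor.
            return new int[] {numOfReplicas, totalNumOfReplicas};
          }
          return new int[] {numOfReplicas, (totalNumOfReplicas - 1) / numOfRacks + 2};
        }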

        Attachments

        1. HDFS-14527.001.patch (1 kB, Xiaoqiao He)
        2. HDFS-14527.002.patch (7 kB, Xiaoqiao He)
        3. HDFS-14527.003.patch (7 kB, Xiaoqiao He)
        4. HDFS-14527.004.patch (7 kB, Xiaoqiao He)
        5. HDFS-14527.005.patch (7 kB, Xiaoqiao He)

            People

            • Assignee:
              hexiaoqiao Xiaoqiao He
            • Reporter:
              hexiaoqiao Xiaoqiao He
            • Votes:
              0
            • Watchers:
              6
