Hadoop HDFS · HDFS-13833

Improve BlockPlacementPolicyDefault's consider load logic


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.2.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      I'm having a random problem with block replication on Hadoop 2.6.0-cdh5.15.0,
      with Cloudera CDH-5.15.0-1.cdh5.15.0.p0.21.

      In my case we are getting this error very randomly (after some hours), on a cluster with only one DataNode (for now; we are trying this Cloudera cluster for a POC).
      Here is the log:

      Choosing random from 1 available nodes on node /default, scope=/default, excludedScope=null, excludeNodes=[]
      2:38:20.527 PM	DEBUG	NetworkTopology	
      Choosing random from 0 available nodes on node /default, scope=/default, excludedScope=null, excludeNodes=[192.168.220.53:50010]
      2:38:20.527 PM	DEBUG	NetworkTopology	
      chooseRandom returning null
      2:38:20.527 PM	DEBUG	BlockPlacementPolicy	
      [
      Node /default/192.168.220.53:50010 [
        Datanode 192.168.220.53:50010 is not chosen since the node is too busy (load: 8 > 0.0).
      2:38:20.527 PM	DEBUG	NetworkTopology	
      chooseRandom returning 192.168.220.53:50010
      2:38:20.527 PM	INFO	BlockPlacementPolicy	
      Not enough replicas was chosen. Reason:{NODE_TOO_BUSY=1}
      2:38:20.527 PM	DEBUG	StateChange	
      closeFile: /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/eef8bff6-75a9-43c1-ae93-4b1a9ca31ad9 with 1 blocks is persisted to the file system
      2:38:20.527 PM	DEBUG	StateChange	
      *BLOCK* NameNode.addBlock: file /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/1cfe900d-6f45-4b55-baaa-73c02ace2660 fileId=129628869 for DFSClient_NONMAPREDUCE_467616914_65
      2:38:20.527 PM	DEBUG	BlockPlacementPolicy	
      Failed to choose from local rack (location = /default); the second replica is not found, retry choosing ramdomly
      org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException: 
      	at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:784)
      	at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:694)
      	at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:601)
      	at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:561)
      	at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:464)
      	at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:395)
      	at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:270)
      	at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:142)
      	at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:158)
      	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1715)
      	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3505)
      	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:694)
      	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:219)
      	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:507)
      	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
      	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
      	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
      	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281)
      	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277)
      	at java.security.AccessController.doPrivileged(Native Method)
      	at javax.security.auth.Subject.doAs(Subject.java:422)
      	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
      	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275)
      
      

      This part makes no sense at all: the only DataNode is rejected because its load (8) is compared against a maximum load of 0.0:

      load: 8 > 0.0
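To illustrate why a 0.0 threshold can appear, here is a minimal, hypothetical sketch of the consider-load comparison. The names (`inServiceXceiverAverage`, `maxLoadFactor`, `tooBusy`) only approximate the BlockPlacementPolicyDefault code path; this is not the actual Hadoop implementation. The assumption shown is that the maximum load is a factor times the average xceiver count over in-service DataNodes, so if the stats momentarily report zero in-service nodes, the average, and hence the threshold, collapses to 0.0 and every node looks "too busy".

```java
// Hypothetical, simplified sketch of the consider-load check behind the
// "is not chosen since the node is too busy (load: 8 > 0.0)" message.
public class ConsiderLoadSketch {
    /** Average xceiver count across in-service DataNodes. */
    static double inServiceXceiverAverage(int totalXceivers, int inServiceNodes) {
        // If the stats temporarily report zero in-service nodes (e.g. the only
        // DataNode is momentarily counted out), the average collapses to 0.0.
        return inServiceNodes == 0 ? 0.0 : (double) totalXceivers / inServiceNodes;
    }

    /** A node is rejected when its load exceeds factor * cluster average. */
    static boolean tooBusy(int nodeXceiverCount, double avg, double maxLoadFactor) {
        double maxLoad = maxLoadFactor * avg;
        return nodeXceiverCount > maxLoad;
    }

    public static void main(String[] args) {
        // Healthy case: one node with 8 xceivers, average 8.0, factor 2.0
        // gives maxLoad 16.0, so the node is accepted.
        System.out.println(tooBusy(8, inServiceXceiverAverage(8, 1), 2.0)); // false
        // Degenerate case from the log: average 0.0 gives maxLoad 0.0,
        // so load 8 > 0.0 and the only node is rejected.
        System.out.println(tooBusy(8, inServiceXceiverAverage(8, 0), 2.0)); // true
    }
}
```

With a single-DataNode cluster, any path that makes the in-service average read as zero reproduces the `NODE_TOO_BUSY` rejection above, which is consistent with the error appearing only intermittently.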

        Attachments

        1. HDFS-13833.001.patch
          5 kB
          Shweta
        2. HDFS-13833.002.patch
          5 kB
          Shweta
        3. HDFS-13833.003.patch
          5 kB
          Shweta
        4. HDFS-13833.004.patch
          5 kB
          Shweta
        5. HDFS-13833.005.patch
          5 kB
          Shweta


              People

               • Assignee: shwetayakkali Shweta
               • Reporter: rikeppb100 Henrique Barros
               • Votes: 0
               • Watchers: 11
