Hadoop HDFS / HDFS-12676

When a block has corrupt replicas, getBlockLocations throws an exception


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: hdfs
    • Labels: None

      Description

When a block has corrupt replicas, the NameNode's getBlockLocations call throws exceptions like the following two (a minimal sketch reproducing the failure mode follows the traces):

      Exception 1:
      2017-10-18 15:24:55,858 WARN blockmanagement.BlockManager (BlockManager.java:createLocatedBlock(938)) - Inconsistent number of corrupt replicas for blk_1073750384_504374 blockMap has 0 but corrupt replicas map has 1
      2017-10-18 15:24:55,859 WARN ipc.Server (Server.java:logException(2433)) - IPC Server handler 116 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations from 10.43.160.18:56313 Call#2 Retry#-1
      java.lang.ArrayIndexOutOfBoundsException: 1
      at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:972)
      at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:911)
      at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlockList(BlockManager.java:884)
      at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlocks(BlockManager.java:1011)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:2010)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1960)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1873)
      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:422)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1865)
      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)

      Exception 2:

      2017-10-12 16:59:36,591 INFO blockmanagement.BlockManager (BlockManager.java:computeReplicationWorkForBlocks(1649)) - Blocks chosen but could not be replicated = 4; of which 0 have no target, 4 have no source, 0 are UC, 0 are abandoned, 0 already have enough replicas.
      2017-10-12 16:59:36,809 WARN blockmanagement.BlockManager (BlockManager.java:createLocatedBlock(938)) - Inconsistent number of corrupt replicas for blk_1073789106_2278702 blockMap has 0 but corrupt replicas map has 2
      2017-10-12 16:59:36,810 WARN ipc.Server (Server.java:logException(2433)) - IPC Server handler 123 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations from 10.46.230.12:47974 Call#2 Retry#-1
      java.lang.NegativeArraySizeException
      at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:946)
      at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:911)
      at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlockList(BlockManager.java:884)
      at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlocks(BlockManager.java:997)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:2010)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1960)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1873)
      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:422)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1865)
      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)
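
The warning "Inconsistent number of corrupt replicas ... blockMap has 0 but corrupt replicas map has N" that precedes both traces suggests the corrupt-replicas map holds stale entries for replicas the block map no longer tracks. Below is a minimal, self-contained sketch of that failure mode; it is not the actual BlockManager code, and the class, method names, and sizing rule are assumptions made only for illustration:

// Illustration only -- not Hadoop code. Names and the sizing rule are assumed;
// they mimic the reported failure mode, not BlockManager itself.
public class CorruptReplicaMismatch {

    // nodesInBlockMap: replicas the block map knows about (none marked corrupt there).
    // corruptMapEntries: entries for the same block in the corrupt-replicas map.
    static String[] locate(int nodesInBlockMap, int corruptMapEntries) {
        // Size the result as "live replicas minus corrupt replicas", trusting the
        // corrupt-replicas map for the second count.
        int numMachines = nodesInBlockMap - corruptMapEntries;
        String[] machines = new String[numMachines];   // negative size -> NegativeArraySizeException

        int j = 0;
        for (int i = 0; i < nodesInBlockMap; i++) {
            // If the corrupt-replicas entries are stale, no live replica is actually
            // filtered out, so every replica is copied and the undersized array overflows.
            machines[j++] = "datanode-" + i;           // -> ArrayIndexOutOfBoundsException
        }
        return machines;
    }

    public static void main(String[] args) {
        try {
            locate(2, 1);  // "blockMap has 0 but corrupt replicas map has 1"
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("Exception 1 analogue: " + e);
        }
        try {
            locate(1, 2);  // "blockMap has 0 but corrupt replicas map has 2"
        } catch (NegativeArraySizeException e) {
            System.out.println("Exception 2 analogue: " + e);
        }
    }
}

With the block map reporting 0 corrupt replicas, a stale corrupt-replicas entry both undersizes the machines array (overrun, as in Exception 1) and, when stale entries outnumber live replicas, makes the computed size negative (as in Exception 2).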


              People

• Assignee: Unassigned
• Reporter: lynnyuan lynn
• Votes: 0
• Watchers: 2
