Hadoop HDFS / HDFS-14360

Some exceptions happened while using ISA-L

    Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: ec, erasure-coding
    • Labels: None

      Description

      I built my Hadoop with ISA-L support. When I try to run a convert job, an exception happens.

      [2019-03-12T11:39:03.183+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # 
      [2019-03-12T11:39:03.184+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # A fatal error has been detected by the Java Runtime Environment: 
      [2019-03-12T11:39:03.184+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # 
      [2019-03-12T11:39:03.184+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # SIGSEGV (0xb) at pc=0x00007fc42e182683, pid=17110, tid=0x00007fc40ce9f700 
      [2019-03-12T11:39:03.184+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # 
      [2019-03-12T11:39:03.184+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # JRE version: Java(TM) SE Runtime Environment (8.0_121-b13) (build 1.8.0_121-b13) 
      [2019-03-12T11:39:03.184+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.121-b13 mixed mode linux-amd64 compressed oops) 
      [2019-03-12T11:39:03.184+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # Problematic frame: 
      [2019-03-12T11:39:03.184+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # V [libjvm.so+0x9bd683] SafepointSynchronize::begin()+0x263 
      [2019-03-12T11:39:03.185+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # 
      [2019-03-12T11:39:03.185+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again 
      [2019-03-12T11:39:03.185+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # 
      [2019-03-12T11:39:03.185+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # An error report file with more information is saved as: 
      [2019-03-12T11:39:03.185+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # /software/servers/hadoop-2.7.1/hs_err_pid17110.log 
      [2019-03-12T11:39:03.191+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # 
      [2019-03-12T11:39:03.191+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # If you would like to submit a bug report, please visit: 
      [2019-03-12T11:39:03.191+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # http://bugreport.java.com/bugreport/crash.jsp 
      [2019-03-12T11:39:03.191+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] : 1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger) : # 
      [2019-03-12T11:39:07.949+08:00] [ERROR] [pool-10-thread-1] : copy file /test/zhanglin/1g to /test/ttlconverter/factory/test/zhanglin/1gfailed
      [2019-03-12T11:39:07.949+08:00] [INFO] [DataXceiver for client DFSClient_NONMAPREDUCE_1740978034_1 at /172.22.176.69:40662 [Receiving block BP-442378117-172.16.150.142-1552360340470:blk_-9223372036854775792_1009]] : Exception for BP-442378117-172.16.150.142-1552360340470:blk_-9223372036854775792_1009
      java.io.IOException: Premature EOF from inputStream
              at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:212)
              at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
              at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
              at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
              at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:529)
              at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:972)
              at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:891)
              at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:171)
              at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:105)
              at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
              at java.lang.Thread.run(Thread.java:745)
      [2019-03-12T11:39:07.951+08:00] [INFO] [DataXceiver for client DFSClient_NONMAPREDUCE_1740978034_1 at /172.22.176.69:40660 [Sending block BP-442378117-172.16.150.142-1552360340470:blk_1073741825_1001]] : Scheduling a check for /data0/dfs
      [2019-03-12T11:39:07.954+08:00] [INFO] [PacketResponder: BP-442378117-172.16.150.142-1552360340470:blk_-9223372036854775792_1009, type=LAST_IN_PIPELINE] : PacketResponder: BP-442378117-172.16.150.142-1552360340470:blk_-9223372036854775792_1009, type=LAST_IN_PIPELINE: Thread is interrupted.
      [2019-03-12T11:39:07.954+08:00] [INFO] [PacketResponder: BP-442378117-172.16.150.142-1552360340470:blk_-9223372036854775792_1009, type=LAST_IN_PIPELINE] : PacketResponder: BP-442378117-172.16.150.142-1552360340470:blk_-9223372036854775792_1009, type=LAST_IN_PIPELINE terminating
      [2019-03-12T11:39:07.954+08:00] [INFO] [DataXceiver for client DFSClient_NONMAPREDUCE_1740978034_1 at /172.22.176.69:40662 [Receiving block BP-442378117-172.16.150.142-1552360340470:blk_-9223372036854775792_1009]] : opWriteBlock BP-442378117-172.16.150.142-1552360340470:blk_-9223372036854775792_1009 received exception java.io.IOException: Premature EOF from inputStream
      [2019-03-12T11:39:07.957+08:00] [ERROR] [DataXceiver for client DFSClient_NONMAPREDUCE_1740978034_1 at /172.22.176.69:40662 [Receiving block BP-442378117-172.16.150.142-1552360340470:blk_-9223372036854775792_1009]] : A01-R02-I176-69-4CY8S12.JD.LOCAL:50010:DataXceiver error processing WRITE_BLOCK operation src: /172.22.176.69:40662 dst: /172.22.176.69:50010
      java.io.IOException: Premature EOF from inputStream
              at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:212)
              at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
              at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
              at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
              at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:529)
              at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:972)
              at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:891)
              at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:171)
              at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:105)
              at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
              at java.lang.Thread.run(Thread.java:745)
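
      One way to narrow this down is to check whether the ISA-L backed native erasure-code library was actually loaded in the crashing JVM. Below is a minimal diagnostic sketch, assuming this EC-enabled build exposes org.apache.hadoop.io.erasurecode.ErasureCodeNative as in Hadoop trunk; the class and methods are an assumption about this build, not something taken from the crash log.

      import org.apache.hadoop.io.erasurecode.ErasureCodeNative;

      // Diagnostic sketch (assumption: ErasureCodeNative from Hadoop trunk is
      // present in this EC-enabled build): report whether the native EC
      // library backed by ISA-L was loaded in the current JVM.
      public class IsalLoadCheck {
        public static void main(String[] args) {
          if (ErasureCodeNative.isNativeCodeLoaded()) {
            System.out.println("Native EC loaded: " + ErasureCodeNative.getLibraryName());
          } else {
            System.out.println("Native EC not loaded: "
                + ErasureCodeNative.getLoadingFailureReason());
          }
        }
      }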
      

       

      Here is my build command:

      mvn clean package -Pdist -Pnative -DskipTests -Dmaven.javadoc.skip=true -Dtar -Dcontainer-executor.conf.dir=/etc/yarn-executor/ -Drequire.snappy -Dsnappy.prefix=/data0/snappy/ -Drequire.isal=true -Disal.prefix=/usr/include -Disal.lib=/usr/lib64/ -Dbundle.isal=true

       

      I checked my native environment, which is shown below:

      Native library checking:
      hadoop: true /software/servers/hadoop-2.7.1/lib/native/libhadoop.so.1.0.0
      zlib: true /lib64/libz.so.1
      snappy: true /lib64/libsnappy.so.1
      lz4: true revision:99
      bzip2: true /lib64/libbz2.so.1
      openssl: true /lib64/libcrypto.so
      ISA-L: true /software/servers/hadoop-2.7.1/lib/native/libisal.so.2
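
      Since ISA-L shows up as loaded, the crash might also be reproducible outside the convert job by driving the native RS coder directly. The following is a minimal sketch assuming the trunk raw-coder API (CodecUtil, ErasureCoderOptions, ErasureCodeConstants) is available in this build; the RS-6-3 layout and 1 MB cell size are assumptions for illustration only.

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.io.erasurecode.CodecUtil;
      import org.apache.hadoop.io.erasurecode.ErasureCodeConstants;
      import org.apache.hadoop.io.erasurecode.ErasureCoderOptions;
      import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder;

      // Sketch only (assumptions: trunk raw-coder API is present in this build,
      // RS-6-3 layout, 1 MB cells): push dummy cells through the RS encoder so
      // a fault in the native coder would surface outside the convert job.
      public class IsalEncodeCheck {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          ErasureCoderOptions options = new ErasureCoderOptions(6, 3); // 6 data + 3 parity
          RawErasureEncoder encoder =
              CodecUtil.createRawEncoder(conf, ErasureCodeConstants.RS_CODEC_NAME, options);

          int cellSize = 1024 * 1024;
          byte[][] inputs = new byte[6][cellSize];
          byte[][] outputs = new byte[3][cellSize];
          encoder.encode(inputs, outputs); // a coder-side SIGSEGV would show up here
          System.out.println("encode() finished, coder = " + encoder.getClass().getName());
        }
      }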

       

            People

            • Assignee: Unassigned
            • Reporter: zzachimonde Lin Zhang
            • Votes: 0
            • Watchers: 3
