  Apache HAWQ / HAWQ-882

insert failed when compresstype=gzip


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.0.0.0-incubating
    • Component/s: None
    • Labels: None

    Description

      The CREATE statement is:
      create table testzip(a int, b varchar(20)) with (appendonly=true, orientation=parquet, compresstype=gzip, compresslevel=8);

      Inserting one record at a time works fine, but inserting 10,000 records in a single statement fails.

      The error is:
      psycopg2.OperationalError: Parquet Storage Write error on segment file 'hdfs://kmaster:8020/hawq_default/16385/16535/16569/1': writing footer file failure (seg0 kslave1:40000 pid=790454)
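
      A minimal reproduction sketch in Python with psycopg2 (the report's own client); the host, port, database, and user below are placeholders, not taken from this report:

      import psycopg2

      # Hypothetical connection settings; adjust for your cluster.
      conn = psycopg2.connect(host="kmaster", port=5432,
                              dbname="postgres", user="gpadmin")
      conn.autocommit = True
      cur = conn.cursor()

      cur.execute("""
          create table testzip(a int, b varchar(20))
          with (appendonly=true, orientation=parquet,
                compresstype=gzip, compresslevel=8);
      """)

      # A single-row insert succeeds:
      cur.execute("insert into testzip values (1, 'one');")

      # Inserting 10,000 rows in one statement raises the
      # psycopg2.OperationalError ("writing footer file failure") above.
      rows = [(i, "row%d" % i) for i in range(10000)]
      values = ",".join(cur.mogrify("(%s,%s)", r).decode() for r in rows)
      cur.execute("insert into testzip values " + values)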

      ----------------------------------------------------------------------------------------------------------------
      The CREATE statement is:
      create table testzip1(a int, b varchar(20)) with (appendonly=true, orientation=parquet, compresstype=snappy);

      When compresstype=snappy, the same bulk insert fails with:
      ERROR: Parquet Storage Write error on segment file 'hdfs://kmaster:8020/hawq_default/16385/16535/16564/1': writing footer file failure (seg0 kslave2:40000 pid=441973)
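
      The snappy case reproduces with the same pattern; only the DDL changes (a sketch continuing the hypothetical connection above):

      cur.execute("""
          create table testzip1(a int, b varchar(20))
          with (appendonly=true, orientation=parquet, compresstype=snappy);
      """)
      rows = [(i, "row%d" % i) for i in range(10000)]
      values = ",".join(cur.mogrify("(%s,%s)", r).decode() for r in rows)
      # Fails with the footer-write error quoted above:
      cur.execute("insert into testzip1 values " + values)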

      ----------------------------------------------------------------------------------------------------------------
      The CREATE statement is:
      create table testzip2(a int, b varchar(20)) with (appendonly=true, compresstype=quicklz, compresslevel=1);

      When compresstype=quicklz, the bulk insert fails with:
      ERROR: Could not flush (fsync) Append-Only segment file 'hdfs://kmaster:8020/hawq_default/16385/16535/16556/1' to disk for relation 'testquicklz': Input/output error (seg0 kslave1:40000 pid=4112)
      DETAIL:
      File /hawq_default/16385/16535/16556/1 could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
      at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1559)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3245)
      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:663)
      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:975)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:415)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
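
      Unlike the parquet cases, the quicklz failure surfaces as an HDFS error: the block "could only be replicated to 0 nodes" even though 2 datanodes are running, which often points at datanodes that are full or rejecting writes rather than at the SQL layer. A sketch that triggers the insert and then checks datanode health (continuing the connection above; assumes the hdfs CLI is on PATH):

      import subprocess

      cur.execute("""
          create table testzip2(a int, b varchar(20))
          with (appendonly=true, compresstype=quicklz, compresslevel=1);
      """)
      rows = [(i, "row%d" % i) for i in range(10000)]
      values = ",".join(cur.mogrify("(%s,%s)", r).decode() for r in rows)
      try:
          cur.execute("insert into testzip2 values " + values)
      except psycopg2.Error as exc:
          print(exc)  # the fsync Input/output error quoted above

      # 'hdfs dfsadmin -report' lists live/dead datanodes and remaining
      # capacity, helping separate a cluster problem from a HAWQ bug.
      report = subprocess.run(["hdfs", "dfsadmin", "-report"],
                              capture_output=True, text=True)
      print(report.stdout)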

      My Hadoop cluster has 2 nodes, version 2.7.1.2.4.
      My HAWQ cluster has 2 nodes, version 2.0.0.


          People

            Assignee: Lei Chang (lei_chang)
            Reporter: liuguo (gliu@kingbase.com.cn)
            Votes: 0
            Watchers: 3
