ORC-237

OrcFile.mergeFiles fails: "Specified block size is less than configured minimum value"

    Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 1.4.0
    • Fix Version/s: None
    • Component/s: None
    • Labels: None
    • Environment:
      Hadoop 2.7.3,
      jdk 1.8.0_121

      Description

      Merging ORC files with OrcFile.mergeFiles fails: the writer is created with an HDFS blockSize equal to the bufferSize (131072 bytes), which the NameNode rejects as below dfs.namenode.fs-limits.min-block-size:

      impl.PhysicalFsWriter: ORC writer created for path: /dw/ods/order_orc/success/dt=2017-06-28_tmp/part-m-00000.orc with stripeSize: 67108864 blockSize: 131072 compression: ZLIB bufferSize: 131072

      Specified block size is less than configured minimum value (dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
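      For context, the rejection comes from a NameNode-side limit, not from ORC itself. A minimal sketch of the comparison (with the 1048576 minimum from the error message hard-coded as an assumption; the real check lives in FSNamesystem.startFileInt) is:

      ```java
      // Sketch of the NameNode-side check that rejects the create() call:
      // the requested block size must be at least
      // dfs.namenode.fs-limits.min-block-size (1048576 in this cluster).
      public class MinBlockSizeCheck {
          static final long MIN_BLOCK_SIZE = 1048576L;

          static boolean accepted(long requestedBlockSize) {
              return requestedBlockSize >= MIN_BLOCK_SIZE;
          }

          public static void main(String[] args) {
              System.out.println(accepted(131072L));    // the blockSize the merge writer requested
              System.out.println(accepted(268435456L)); // 256 MB, the documented ORC default
          }
      }
      ```

      Any requested block size below the configured minimum fails file creation, so the writer's blockSize of 131072 — which happens to equal the bufferSize — looks like the wrong value being passed through.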

        Activity

        PIPE EbCead added a comment -

        17/09/12 19:28:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
        17/09/12 19:28:11 INFO security.UserGroupInformation: Login successful for user xx using keytab file /Users/pipe/xx.keytab
        17/09/12 19:28:12 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
        17/09/12 19:28:12 INFO orc.OrcMainTest: Input file list:===:hdfs://a1.dm.ambari:8020/dw/ods/order_orc/success/dt=2017-06-28/part-m-00000.orc
        17/09/12 19:28:12 INFO orc.OrcMainTest: Input file size:===:9304992
        17/09/12 19:28:12 INFO orc.OrcMainTest: Input file list:===:hdfs://a1.dm.ambari:8020/dw/ods/order_orc/success/dt=2017-06-28/part-m-00001.orc
        17/09/12 19:28:12 INFO orc.OrcMainTest: Input file size:===:9616657
        17/09/12 19:28:12 INFO orc.OrcMainTest: Input file list:===:hdfs://a1.dm.ambari:8020/dw/ods/order_orc/success/dt=2017-06-28/part-m-00002.orc
        17/09/12 19:28:12 INFO orc.OrcMainTest: Input file size:===:3188876
        17/09/12 19:28:12 INFO impl.OrcCodecPool: Got brand-new codec ZLIB
        17/09/12 19:28:12 INFO impl.PhysicalFsWriter: ORC writer created for path: /dw/ods/order_orc/success/dt=2017-06-28_tmp/part-m-00000.orc with stripeSize: 67108864 blockSize: 131072 compression: ZLIB bufferSize: 131072

        org.apache.hadoop.ipc.RemoteException(java.io.IOException): Specified block size is less than configured minimum value (dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2600)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2555)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:735)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:408)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)

        at org.apache.hadoop.ipc.Client.call(Client.java:1475)
        at org.apache.hadoop.ipc.Client.call(Client.java:1412)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy15.create(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy16.create(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1648)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1689)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1624)
        at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
        at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:459)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892)
        at org.apache.orc.impl.PhysicalFsWriter.<init>(PhysicalFsWriter.java:91)
        at org.apache.orc.impl.WriterImpl.<init>(WriterImpl.java:184)
        at org.apache.orc.OrcFile.createWriter(OrcFile.java:685)
        at org.apache.orc.OrcFile.mergeFiles(OrcFile.java:830)

        PIPE EbCead added a comment - edited

        public static void doMerge(Configuration configuration, String inputPath,
                                   String outputPath, Log LOG) throws Exception {
            Path oPath = new Path(outputPath);

            // Remove any previous output so mergeFiles can create it fresh.
            FileSystem fileSystem = FileSystem.get(configuration);
            if (fileSystem.exists(oPath)) {
                fileSystem.delete(oPath, true);
            }

            // Merge all input ORC files into a single output file using
            // the default writer options from the job configuration.
            OrcFile.WriterOptions writerOptions = OrcFile.writerOptions(configuration);
            List<Path> pathList = FileSystemUtil.getFiles(fileSystem, inputPath, LOG);
            OrcFile.mergeFiles(oPath, writerOptions, pathList);
        }
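        Until the merge path is fixed, a possible workaround — a sketch only, not verified against 1.4.0, and assuming OrcFile.WriterOptions.blockSize(long) is honored by mergeFiles rather than overridden — is to pin the HDFS block size explicitly:

        ```java
        import java.util.List;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.Path;
        import org.apache.orc.OrcFile;

        public class MergeWorkaround {
            // Workaround sketch: set the block size explicitly so the writer
            // created by mergeFiles does not end up with a value below
            // dfs.namenode.fs-limits.min-block-size.
            public static void merge(Configuration conf, Path out, List<Path> inputs)
                    throws Exception {
                OrcFile.WriterOptions opts = OrcFile.writerOptions(conf)
                        .blockSize(256L * 1024 * 1024); // 256 MB, the ORC default
                OrcFile.mergeFiles(out, opts, inputs);
            }
        }
        ```

        If mergeFiles overwrites the option with a value derived from the input files — which the log above suggests — this will not help, and the bug needs an upstream fix.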
        

          People

          • Assignee: Unassigned
          • Reporter: PIPE EbCead
          • Votes: 0
          • Watchers: 1
