HIVE-6866: HiveServer2 JDBC driver connection leak with NameNode

    Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 0.11.0
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

      Description

      1. Set 'ipc.client.connection.maxidletime' to 3600000 in core-site.xml and start HiveServer2.
      2. Connect to HiveServer2 repeatedly in a `while true` loop.
      3. The number of TCP connections grows until the process runs out of memory; HiveServer2 does not appear to close the connections until they time out. The error message is the following:
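The setup in steps 1 and 2 can be sketched as follows. The property name and value come from the report; everything else below (host, port, query) is an illustrative placeholder, not taken from the reporter's environment.

```xml
<!-- core-site.xml (step 1): keep idle IPC client connections open for one hour -->
<property>
  <name>ipc.client.connection.maxidletime</name>
  <value>3600000</value>
</property>
```

Step 2 can then be driven with repeated client connections, for example `while true; do beeline -u jdbc:hive2://hdm1.hadoop.local:10000 -e 'show tables;'; done` (the beeline invocation is illustrative; the reporter may have used the JDBC driver directly), while watching the NameNode connection count with something like `netstat -an | grep :8020 | wc -l`.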

      2014-03-18 23:30:36,873 ERROR ql.Driver (SessionState.java:printError(386)) - FAILED: RuntimeException java.io.IOException: Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "hdm1.hadoop.local/192.168.2.101"; destination host is: "hdm1.hadoop.local":8020;
      java.lang.RuntimeException: java.io.IOException: Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "hdm1.hadoop.local/192.168.2.101"; destination host is: "hdm1.hadoop.local":8020;
      	at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:190)
      	at org.apache.hadoop.hive.ql.Context.getMRScratchDir(Context.java:231)
      	at org.apache.hadoop.hive.ql.Context.getMRTmpFileURI(Context.java:288)
      	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1274)
      	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1059)
      	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8676)
      	at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:278)
      	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:433)
      	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337)
      	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902)
      	at org.apache.hive.service.cli.operation.SQLOperation.run(SQLOperation.java:95)
      	at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatement(HiveSessionImpl.java:181)
      	at org.apache.hive.service.cli.CLIService.executeStatement(CLIService.java:148)
      	at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:203)
      	at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1133)
      	at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1118)
      	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
      	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
      	at org.apache.hive.service.auth.TUGIContainingProcessor$1.run(TUGIContainingProcessor.java:40)
      	at org.apache.hive.service.auth.TUGIContainingProcessor$1.run(TUGIContainingProcessor.java:37)
      	at java.security.AccessController.doPrivileged(Native Method)
      	at javax.security.auth.Subject.doAs(Subject.java:415)
      	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
      	at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:524)
      	at org.apache.hive.service.auth.TUGIContainingProcessor.process(TUGIContainingProcessor.java:37)
      	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      	at java.lang.Thread.run(Thread.java:744)
      Caused by: java.io.IOException: Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "hdm1.hadoop.local/192.168.2.101"; destination host is: "hdm1.hadoop.local":8020;
      	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:761)
      	at org.apache.hadoop.ipc.Client.call(Client.java:1239)
      	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
      	at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
      	at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:606)
      	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
      	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
      	at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
      	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:483)
      	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2259)
      	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2230)
      	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:540)
      	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1881)
      	at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:182)
      	... 28 more
      Caused by: java.io.IOException: Couldn't set up IO streams
      	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:662)
      	at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:253)
      	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1288)
      	at org.apache.hadoop.ipc.Client.call(Client.java:1206)
      	... 42 more
      Caused by: java.lang.OutOfMemoryError: unable to create new native thread
      	at java.lang.Thread.start0(Native Method)
      	at java.lang.Thread.start(Thread.java:713)
      	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:655)
      	... 45 more
      

        Activity

        Ankita Bakshi added a comment -

        We are facing the same issue in production. We are using CDH4.4 with Apache Hive 0.12. Is there a workaround for this issue other than restarting HiveServer2?

        Zilvinas Saltys added a comment -

        Got the same issue on CDH5. It seems that when the open-files limit is set too low for Hive and these connections are not closed, it starts throwing this error.

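As a diagnostic for the situation described in the comment above, one can compare the open-files limit against the descriptors actually held by the HiveServer2 process. This is a minimal Linux-only sketch; the `HiveServer2` process-matching pattern is an assumption and may need adjusting for a given deployment.

```shell
# Find the (oldest) process whose command line mentions HiveServer2;
# the pattern is an assumption -- adjust for your deployment.
HS2_PID=$(pgrep -f -o HiveServer2 || true)

# Soft limit on open file descriptors for the current shell.
ulimit -n

if [ -n "$HS2_PID" ]; then
  # Number of file descriptors currently held by the HiveServer2 process.
  ls /proc/"$HS2_PID"/fd | wc -l
  # Effective limits of that process, including "Max open files".
  grep 'open files' /proc/"$HS2_PID"/limits
fi
```

If the descriptor count climbs toward the limit while connections to the NameNode (port 8020 in the trace above) accumulate, raising the limit only delays the failure; the leaked connections themselves remain the underlying issue.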

          People

          • Assignee: Unassigned
          • Reporter: Shengjun Xin
          • Votes: 1
          • Watchers: 9
