Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 0.12.0, 0.13.0
- Environment: Hive 0.12.0, Hadoop 1.1.2, Debian.
Description
Opening multiple connections to HiveServer2, all of which are closed and disposed of properly, causes the Java heap of the server to grow extremely quickly.
The issue can be reproduced with the following code:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

import org.apache.hive.service.cli.HiveSQLException;
import org.apache.log4j.Logger;

/*
 * Class which encapsulates the lifecycle of a query or statement.
 * Provides functionality which allows you to create a connection.
 */
public class HiveClient {

    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    Connection con;
    Logger logger;
    private String db;

    public HiveClient(String db) {
        logger = Logger.getLogger(HiveClient.class);
        this.db = db;
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            logger.info("Can't find Hive driver");
        }
        String hiveHost = GlimmerServer.config.getString("hive/host");
        String hivePort = GlimmerServer.config.getString("hive/port");
        String connectionString = "jdbc:hive2://" + hiveHost + ":" + hivePort + "/default";
        logger.info(String.format("Attempting to connect to %s", connectionString));
        try {
            con = DriverManager.getConnection(connectionString, "", "");
        } catch (Exception e) {
            logger.error("Problem instantiating the connection " + e.getMessage());
        }
    }

    public int update(String query) {
        Integer res = 0;
        Statement stmt = null;
        try {
            stmt = con.createStatement();
            String switchdb = "USE " + db;
            logger.info(switchdb);
            stmt.executeUpdate(switchdb);
            logger.info(query);
            res = stmt.executeUpdate(query);
            logger.info("Query passed to server");
            stmt.close();
        } catch (HiveSQLException e) {
            logger.info(String.format("HiveSQLException thrown, this can be valid, "
                    + "but check the error: %s from the query %s", query, e.toString()));
        } catch (SQLException e) {
            logger.error(String.format("Unable to execute query SQLException %s. Error: %s", query, e));
        } catch (Exception e) {
            logger.error(String.format("Unable to execute query %s. Error: %s", query, e));
        }
        if (stmt != null) {
            try {
                stmt.close();
            } catch (SQLException e) {
                logger.error("Cannot close the statement, potential memory leak " + e);
            }
        }
        return res;
    }

    public void close() {
        if (con != null) {
            try {
                con.close();
            } catch (SQLException e) {
                logger.info("Problem closing connection " + e);
            }
        }
    }
}
Repeatedly creating and closing HiveClient objects causes the heap space used by the HiveServer2 RunJar process to grow extremely quickly, and that space is never released.
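For reference, a driver loop of the kind described above might look like the following sketch. The class name LeakDriver and the query are hypothetical; it assumes a reachable HiveServer2 instance and the HiveClient class from the listing on the classpath, so it is a reproduction outline rather than a runnable test.

```java
/*
 * Hypothetical reproduction driver: repeatedly open and close a
 * HiveClient. Each iteration opens one JDBC connection to HiveServer2
 * and closes it again. Even though every connection is closed on the
 * client side, the HiveServer2 JVM heap keeps growing.
 */
public class LeakDriver {
    public static void main(String[] args) {
        for (int i = 0; i < 10000; i++) {
            HiveClient client = new HiveClient("default");
            client.update("SHOW TABLES");  // any trivial statement
            client.close();                // connection closed properly
        }
    }
}
```

While the loop runs, the server-side heap growth can be observed with standard JDK tooling such as `jstat -gcutil <hiveserver2-pid>`.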
Attachments
Issue Links
- is related to
  - HIVE-3098 Memory leak from large number of FileSystem instances in FileSystem.CACHE (Closed)
- relates to
  - HIVE-5268 HiveServer2 accumulates orphaned OperationHandle objects when a client fails while executing query (Open)
  - HIVE-4501 HS2 memory leak - FileSystem objects in FileSystem.CACHE (Resolved)