Hive / HIVE-21271

"Error in acquiring locks" when querying a large partitioned table


    Details

    • Type: Bug
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 3.1.0
    • Fix Version/s: None
    • Component/s: Standalone Metastore
    • Labels: None
    • Environment:
      Hortonworks HDP 3.1 using Azure SQL as metastore

    Description

      When querying a large partitioned table, acquiring locks on the partitions fails because the RDBMS used as the metastore limits the total number of parameters it accepts in a single request.

      Specifically, when Microsoft SQL Server is used as the metastore it allows at most 2100 parameters per request, so the multi-row insert issued by enqueueLockWithRetry fails.

      The table in question has a single partition level, the epoch of the start of each day. A simple SELECT COUNT(1) over one whole year fails with this error, while shorter date ranges work fine.
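      For a one-year scan over daily partitions the lock request carries roughly 365 LockComponents, and enqueueLockWithRetry binds each of them as several parameters of one multi-row insert into the lock table; at an assumed six bound columns per row that is already about 2190 placeholders, past the 2100 cap. A minimal, self-contained JDBC sketch of the same failure mode (the connection string, the LOCKS_DEMO table, and its columns are hypothetical stand-ins, not the actual HIVE_LOCKS schema or the real per-row column count):

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;

      public class TooManyParamsRepro {
        public static void main(String[] args) throws Exception {
          // Hypothetical connection string; any SQL Server / Azure SQL instance reproduces it.
          try (Connection conn = DriverManager.getConnection(
              "jdbc:sqlserver://localhost;databaseName=test;user=sa;password=...")) {
            int rows = 365;       // one lock row per daily partition in a one-year scan
            int colsPerRow = 6;   // assumed number of bound columns per lock row
            // 365 * 6 = 2190 placeholders, above SQL Server's 2100-parameter cap
            StringBuilder sql = new StringBuilder(
                "INSERT INTO LOCKS_DEMO (a, b, c, d, e, f) VALUES ");
            for (int i = 0; i < rows; i++) {
              sql.append(i == 0 ? "" : ", ").append("(?, ?, ?, ?, ?, ?)");
            }
            try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
              for (int p = 1; p <= rows * colsPerRow; p++) {
                ps.setString(p, "v");
              }
              ps.executeUpdate(); // fails with SQLState=S0001, ErrorCode=8003, as in the log below
            }
          }
        }
      }

      A chunked variant of this insert, sketched after the stack trace below, keeps each statement under the cap.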

      2019-02-14T23:37:15,688 INFO  [pool-7-thread-189]: metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:aggrColStatsForPartitions(1485)) - useDensityFunctionForNDVEstimation = false
      partsFound = 0
      ColumnStatisticsObj = []
      2019-02-14T23:37:16,134 INFO  [pool-7-thread-189]: txn.TxnHandler (TxnHandler.java:checkRetryable(3723)) - Non-retryable error in enqueueLockWithRetry(LockRequest(component:[LockComponent(type:SHARED_READ, level:PARTITION, dbname:default, tablename:trips, partitionname:tripstartday=1523664000, operationType:SELECT, isTransactional:true), LockComponent(type:SHARED_READ, level:PARTITION, dbname:default, tablename:trips, partitionname:tripstartday=1538265600, operationType:SELECT, isTransactional:true), LockComponent(type:SHARED_READ, level:PARTITION, dbname:default, tablename:trips, partitionname:tripstartday=152012......
      ....
      : The incoming request has too many parameters. The server supports a maximum of 2100 parameters. Reduce the number of parameters and resend the request. (SQLState=S0001, ErrorCode=8003)
      2019-02-14T23:37:16,136 ERROR [pool-7-thread-189]: metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(201)) - MetaException(message:Unable to update transaction database com.microsoft.sqlserver.jdbc.SQLServerException: The incoming request has too many parameters. The server supports a maximum of 2100 parameters. Reduce the number of parameters and resend the request.
        at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:254)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1608)
        at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:578)
        at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:508)
        at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7240)
        at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2869)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:243)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:218)
        at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeUpdate(SQLServerPreparedStatement.java:461)
        at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
        at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
        at org.apache.hadoop.hive.metastore.txn.TxnHandler.enqueueLockWithRetry(TxnHandler.java:2421)
        at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:2168)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaStore.java:7598)
        at sun.reflect.GeneratedMethodAccessor92.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
        at com.sun.proxy.$Proxy31.lock(Unknown Source)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$lock.getResult(ThriftHiveMetastore.java:18738)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$lock.getResult(ThriftHiveMetastore.java:18722)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:111)
        at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:107)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
        at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:119)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
      )
        at org.apache.hadoop.hive.metastore.txn.TxnHandler.enqueueLockWithRetry(TxnHandler.java:2431)
        at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:2168)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaStore.java:7598)
        at sun.reflect.GeneratedMethodAccessor92.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
        at com.sun.proxy.$Proxy31.lock(Unknown Source)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$lock.getResult(ThriftHiveMetastore.java:18738)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$lock.getResult(ThriftHiveMetastore.java:18722)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:111)
        at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:107)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
        at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:119)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
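      As a possible direction (an illustrative sketch only, not an actual TxnHandler patch; LOCKS_DEMO and its columns a..f are hypothetical stand-ins for the real lock table), the multi-row insert could be split into chunks so each statement binds fewer parameters than the backend allows:

      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.SQLException;
      import java.util.List;

      public class ChunkedLockInsert {
        private static final int COLS_PER_ROW = 6;     // assumed bound columns per lock row
        private static final int PARAM_BUDGET = 2000;  // headroom under SQL Server's 2100-parameter cap
        private static final int MAX_ROWS = PARAM_BUDGET / COLS_PER_ROW;

        /** Inserts lock rows in batches that each stay under the parameter budget. */
        static void insertLocks(Connection conn, List<String[]> rows) throws SQLException {
          for (int start = 0; start < rows.size(); start += MAX_ROWS) {
            int end = Math.min(start + MAX_ROWS, rows.size());
            StringBuilder sql = new StringBuilder(
                "INSERT INTO LOCKS_DEMO (a, b, c, d, e, f) VALUES ");
            for (int i = start; i < end; i++) {
              sql.append(i == start ? "" : ", ").append("(?, ?, ?, ?, ?, ?)");
            }
            try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
              int p = 1;
              for (int i = start; i < end; i++) {
                for (String v : rows.get(i)) {   // each row supplies COLS_PER_ROW values
                  ps.setString(p++, v);
                }
              }
              ps.executeUpdate();                // at most PARAM_BUDGET placeholders per statement
            }
          }
        }
      }

      The same shape of fix would apply to any metastore RDBMS with a per-statement parameter limit, with the chunk size derived from the backend's cap.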

            People

            • Assignee: Unassigned
            • Reporter: mazar (Massoud Mazar)
            • Votes: 0
            • Watchers: 1
