Hive / HIVE-15904

SELECT query throws NullPointerException in org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.generateSemiJoinOperatorPlan


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.2.0
    • Component/s: HiveServer2
    • Labels: None

    Description

      The following query fails with a NullPointerException thrown from org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.generateSemiJoinOperatorPlan.

      The CREATE TABLE statements for table_1 and table_18 are attached (table_1.q and table_18.q).

      Query:
      SELECT
      COALESCE(498, LEAD(COALESCE(-973, -684, 515)) OVER (PARTITION BY (t2.int_col_10 + t1.smallint_col_50) ORDER BY (t2.int_col_10 + t1.smallint_col_50), FLOOR(t1.double_col_16) DESC), 524) AS int_col,
      (t2.int_col_10) + (t1.smallint_col_50) AS int_col_1,
      FLOOR(t1.double_col_16) AS float_col,
      COALESCE(SUM(COALESCE(62, -380, -435)) OVER (PARTITION BY (t2.int_col_10 + t1.smallint_col_50) ORDER BY (t2.int_col_10 + t1.smallint_col_50) DESC, FLOOR(t1.double_col_16) DESC ROWS BETWEEN UNBOUNDED PRECEDING AND 48 FOLLOWING), 704) AS int_col_2
      FROM table_1 t1
      INNER JOIN table_18 t2 ON (((t2.tinyint_col_15) = (t1.bigint_col_7)) AND
      ((t2.decimal2709_col_9) = (t1.decimal2016_col_26))) AND
      ((t2.tinyint_col_20) = (t1.tinyint_col_3))
      WHERE (t2.smallint_col_19) IN (SELECT
      COALESCE(-92, -994) AS int_col
      FROM table_1 tt1
      INNER JOIN table_18 tt2 ON (tt2.decimal1911_col_16) = (tt1.decimal2612_col_77)
      WHERE (t1.timestamp_col_9) = (tt2.timestamp_col_18));

      Error Stack:

      org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: NullPointerException null
      at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:387)
      at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:193)
      at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:276)
      at org.apache.hive.service.cli.operation.Operation.run(Operation.java:324)
      at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:507)
      at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:495)
      at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:308)
      at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:506)
      at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1437)
      at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1422)
      at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
      at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
      at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:599)
      at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_112]
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_112]
      at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
      Caused by: java.lang.NullPointerException
      at org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.generateSemiJoinOperatorPlan(DynamicPartitionPruningOptimization.java:402)
      at org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.process(DynamicPartitionPruningOptimization.java:226)
      at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
      at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
      at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
      at org.apache.hadoop.hive.ql.lib.ForwardWalker.walk(ForwardWalker.java:74)
      at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120)
      at org.apache.hadoop.hive.ql.parse.TezCompiler.runDynamicPartitionPruning(TezCompiler.java:358)
      at org.apache.hadoop.hive.ql.parse.TezCompiler.optimizeOperatorPlan(TezCompiler.java:90)
      at org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:134)
      at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11126)
      at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:288)
      at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:257)
      at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:447)
      at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:329)
      at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1189)
      at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1176)
      at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:191)
      ... 15 more
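
      The NPE is raised while the Tez compiler runs dynamic partition pruning and tries to build a semijoin-reduction branch for the correlated IN subquery. As an interim workaround (an assumption based on the stack trace, not a verified fix for this exact build), disabling Tez semijoin reduction should keep the optimizer from entering generateSemiJoinOperatorPlan so the query can compile:

      ```sql
      -- Hypothetical workaround, per session: skip the semijoin-reduction
      -- branch of dynamic partition pruning so
      -- generateSemiJoinOperatorPlan is never invoked during compilation.
      SET hive.tez.dynamic.semijoin.reduction=false;
      ```

      This trades away the semijoin-reduction optimization for the session, so it is a stopgap until the attached patch lands, not a substitute for the fix.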

      Attachments

        1. HIVE-15904.6.patch
          20 kB
          Jason Dere
        2. HIVE-15904.5.patch
          20 kB
          Jason Dere
        3. HIVE-15904.4.patch
          36 kB
          Jason Dere
        4. HIVE-15904.3.patch
          58 kB
          Deepak Jaiswal
        5. HIVE-15904.2.patch
          4 kB
          Deepak Jaiswal
        6. HIVE-15904.1.patch
          4 kB
          Deepak Jaiswal
        7. table_1.q
          3 kB
          Aswathy Chellammal Sreekumar
        8. table_18.q
          0.9 kB
          Aswathy Chellammal Sreekumar


          People

            Assignee: Jason Dere (jdere)
            Reporter: Aswathy Chellammal Sreekumar (asreekumar)
            Votes: 0
            Watchers: 4
