Hive / HIVE-4160 Vectorized Query Execution in Hive / HIVE-4745

java.lang.RuntimeException: Hive Runtime Error while closing operators: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop.hive.serde2.io.DoubleWritable


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: vectorization-branch
    • Fix Version/s: vectorization-branch, 0.13.0
    • Component/s: None
    • Labels: None

    Description

      SELECT SUM(L_QUANTITY),
             (SUM(L_QUANTITY) + -1.30000000000000000000E+000),
             (-2.20000000000000020000E+000 % (SUM(L_QUANTITY) + -1.30000000000000000000E+000)),
             MIN(L_EXTENDEDPRICE)
      FROM   lineitem_orc
      WHERE  ((L_EXTENDEDPRICE <= L_LINENUMBER)
              OR (L_TAX > L_EXTENDEDPRICE));
      

      Executed over the TPC-H lineitem table at scale factor 1 GB.

      13/06/15 11:19:17 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
      
      Logging initialized using configuration in file:/C:/Hadoop/hive-0.9.0/conf/hive-log4j.properties
      Hive history file=c:\hadoop\hive-0.9.0\logs\history/hive_job_log_jenkinsuser_5292@SLAVE23-WIN_201306151119_1652846565.txt
      Total MapReduce jobs = 1
      
      Launching Job 1 out of 1
      
      Number of reduce tasks determined at compile time: 1
      
      In order to change the average load for a reducer (in bytes):
        set hive.exec.reducers.bytes.per.reducer=<number>
      In order to limit the maximum number of reducers:
        set hive.exec.reducers.max=<number>
      In order to set a constant number of reducers:
        set mapred.reduce.tasks=<number>
      
      Starting Job = job_201306142329_0098, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0098
      Kill Command = c:\Hadoop\hadoop-1.1.0-SNAPSHOT\bin\hadoop.cmd job  -kill job_201306142329_0098
      Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
      2013-06-15 11:19:47,490 Stage-1 map = 0%,  reduce = 0%
      2013-06-15 11:20:29,801 Stage-1 map = 76%,  reduce = 0%
      2013-06-15 11:20:32,849 Stage-1 map = 0%,  reduce = 0%
      2013-06-15 11:20:35,880 Stage-1 map = 100%,  reduce = 100%
      Ended Job = job_201306142329_0098 with errors
      Error during job, obtaining debugging information...
      Job Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0098
      Examining task ID: task_201306142329_0098_m_000002 (and more) from job job_201306142329_0098
      
      Task with the most failures(4): 
      -----
      Task ID:
        task_201306142329_0098_m_000000
      
      URL:
        http://localhost:50030/taskdetails.jsp?jobid=job_201306142329_0098&tipid=task_201306142329_0098_m_000000
      -----
      Diagnostic Messages for this Task:
      java.lang.RuntimeException: Hive Runtime Error while closing operators
      	at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(VectorExecMapper.java:229)
      	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
      	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
      	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
      	at org.apache.hadoop.mapred.Child$4.run(Child.java:271)
      	at java.security.AccessController.doPrivileged(Native Method)
      	at javax.security.auth.Subject.doAs(Subject.java:396)
      	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1135)
      	at org.apache.hadoop.mapred.Child.main(Child.java:265)
      Caused by: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop.hive.serde2.io.DoubleWritable
      	at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableDoubleObjectInspector.get(WritableDoubleObjectInspector.java:35)
      	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:340)
      	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serializeStruct(LazyBinarySerDe.java:257)
      	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:204)
      	at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:245)
      	at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:502)
      	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:832)
      	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.flush(VectorGroupByOperator.java:281)
      	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.closeOp(VectorGroupByOperator.java:423)
      	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588)
      	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
      	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
      	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
      	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
      	at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(VectorExecMapper.java:196)
      	... 8 more
      
      
      FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
      MapReduce Jobs Launched: 
      Job 0: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
      Total MapReduce CPU Time Spent: 0 msec
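
      Reading the trace, the cast happens when the vectorized GROUP BY flushes its output and ReduceSinkOperator serializes it with LazyBinarySerDe: the DOUBLE object inspector is handed an org.apache.hadoop.io.NullWritable where it expects an org.apache.hadoop.hive.serde2.io.DoubleWritable. The WHERE clause above appears to select no rows from the 1 GB lineitem table, so the flushed aggregate values are NULL. A minimal, self-contained sketch of just that cast (an illustration only, not the Hive code path itself; PrimitiveObjectInspectorFactory is used here only as a convenient way to obtain the inspector named in the trace):

          import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
          import org.apache.hadoop.io.NullWritable;

          public class NullWritableCastSketch {
            public static void main(String[] args) {
              // WritableDoubleObjectInspector.get() casts its argument to
              // org.apache.hadoop.hive.serde2.io.DoubleWritable, so handing it the
              // NullWritable singleton raises the same ClassCastException as above.
              PrimitiveObjectInspectorFactory.writableDoubleObjectInspector
                  .get(NullWritable.get());
            }
          }

      The LongWritable and Object[] variants in the traces below look like the same pattern surfacing through WritableLongObjectInspector and StandardStructObjectInspector.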
      
      
      

      Similar failures are seen with other queries:

      SELECT VAR_SAMP(L_SUPPKEY),
             (VAR_SAMP(L_SUPPKEY) - -2.20000000000000020000E+000),
             (-(VAR_SAMP(L_SUPPKEY)))
      FROM   lineitem_orc
      WHERE  ((L_SUPPKEY = -1));
      
      13/06/15 11:41:08 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
      
      Logging initialized using configuration in file:/C:/Hadoop/hive-0.9.0/conf/hive-log4j.properties
      
      Hive history file=c:\hadoop\hive-0.9.0\logs\history/hive_job_log_jenkinsuser_6976@SLAVE23-WIN_201306151141_1255577417.txt
      
      Total MapReduce jobs = 1
      Launching Job 1 out of 1
      Number of reduce tasks determined at compile time: 1
      In order to change the average load for a reducer (in bytes):
        set hive.exec.reducers.bytes.per.reducer=<number>
      In order to limit the maximum number of reducers:
        set hive.exec.reducers.max=<number>
      In order to set a constant number of reducers:
        set mapred.reduce.tasks=<number>
      Starting Job = job_201306142329_0109, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0109
      Kill Command = c:\Hadoop\hadoop-1.1.0-SNAPSHOT\bin\hadoop.cmd job  -kill job_201306142329_0109
      Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
      2013-06-15 11:41:38,753 Stage-1 map = 0%,  reduce = 0%
      2013-06-15 11:42:17,959 Stage-1 map = 100%,  reduce = 100%
      Ended Job = job_201306142329_0109 with errors
      Error during job, obtaining debugging information...
      Job Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0109
      Examining task ID: task_201306142329_0109_m_000002 (and more) from job job_201306142329_0109
      
      Task with the most failures(4): 
      -----
      Task ID:
        task_201306142329_0109_m_000000
      
      URL:
        http://localhost:50030/taskdetails.jsp?jobid=job_201306142329_0109&tipid=task_201306142329_0109_m_000000
      -----
      Diagnostic Messages for this Task:
      java.lang.RuntimeException: Hive Runtime Error while closing operators
      	at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(VectorExecMapper.java:229)
      	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
      	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
      	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
      	at org.apache.hadoop.mapred.Child$4.run(Child.java:271)
      	at java.security.AccessController.doPrivileged(Native Method)
      	at javax.security.auth.Subject.doAs(Subject.java:396)
      	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1135)
      	at org.apache.hadoop.mapred.Child.main(Child.java:265)
      Caused by: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to [Ljava.lang.Object;
      	at org.apache.hadoop.hive.serde2.objectinspector.StandardStructObjectInspector.getStructFieldData(StandardStructObjectInspector.java:166)
      	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serializeStruct(LazyBinarySerDe.java:248)
      	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:534)
      	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serializeStruct(LazyBinarySerDe.java:257)
      	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:204)
      	at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:245)
      	at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:502)
      	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:832)
      	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.flush(VectorGroupByOperator.java:281)
      	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.closeOp(VectorGroupByOperator.java:423)
      	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588)
      	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
      	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
      	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
      	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
      	at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(VectorExecMapper.java:196)
      	... 8 more
      
      
      FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
      MapReduce Jobs Launched: 
      Job 0: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
      
      Total MapReduce CPU Time Spent: 0 msec
      
      
      SELECT MAX(L_ORDERKEY),
             (MAX(L_ORDERKEY) + 2)
      FROM   lineitem_orc
      WHERE  ((L_ORDERKEY <= L_DISCOUNT));
      
      13/06/15 11:55:02 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
      
      Logging initialized using configuration in file:/C:/Hadoop/hive-0.9.0/conf/hive-log4j.properties
      Hive history file=c:\hadoop\hive-0.9.0\logs\history/hive_job_log_jenkinsuser_3208@SLAVE23-WIN_201306151155_919702960.txt
      Total MapReduce jobs = 1
      Launching Job 1 out of 1
      Number of reduce tasks determined at compile time: 1
      In order to change the average load for a reducer (in bytes):
        set hive.exec.reducers.bytes.per.reducer=<number>
      In order to limit the maximum number of reducers:
        set hive.exec.reducers.max=<number>
      In order to set a constant number of reducers:
        set mapred.reduce.tasks=<number>
      Starting Job = job_201306142329_0114, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0114
      Kill Command = c:\Hadoop\hadoop-1.1.0-SNAPSHOT\bin\hadoop.cmd job  -kill job_201306142329_0114
      Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
      2013-06-15 11:55:34,191 Stage-1 map = 0%,  reduce = 0%
      2013-06-15 11:56:14,464 Stage-1 map = 100%,  reduce = 100%
      Ended Job = job_201306142329_0114 with errors
      Error during job, obtaining debugging information...
      Job Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0114
      Examining task ID: task_201306142329_0114_m_000002 (and more) from job job_201306142329_0114
      
      Task with the most failures(4): 
      -----
      Task ID:
        task_201306142329_0114_m_000000
      
      URL:
        http://localhost:50030/taskdetails.jsp?jobid=job_201306142329_0114&tipid=task_201306142329_0114_m_000000
      -----
      Diagnostic Messages for this Task:
      java.lang.RuntimeException: Hive Runtime Error while closing operators
      	at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(VectorExecMapper.java:229)
      	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
      	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
      	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
      	at org.apache.hadoop.mapred.Child$4.run(Child.java:271)
      	at java.security.AccessController.doPrivileged(Native Method)
      	at javax.security.auth.Subject.doAs(Subject.java:396)
      	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1135)
      	at org.apache.hadoop.mapred.Child.main(Child.java:265)
      Caused by: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop.io.LongWritable
      	at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableLongObjectInspector.get(WritableLongObjectInspector.java:35)
      	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:325)
      	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serializeStruct(LazyBinarySerDe.java:257)
      	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:204)
      	at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:245)
      	at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:502)
      	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:832)
      	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.flush(VectorGroupByOperator.java:281)
      	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.closeOp(VectorGroupByOperator.java:423)
      	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588)
      	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
      	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
      	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
      	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
      	at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(VectorExecMapper.java:196)
      	... 8 more
      
      
      
      
      FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
      
      MapReduce Jobs Launched: 
      Job 0: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
      Total MapReduce CPU Time Spent: 0 msec
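
      All three failures surface at the same point (LazyBinarySerDe serializing the GROUP BY output inside ReduceSinkOperator) and differ only in the Writable type the object inspector expects. A hedged sketch of the distinction the serializer cares about (illustrative only, not the HIVE-4745 patch; toSerDeField is a hypothetical helper): a plain Java null field is written as SQL NULL, whereas a NullWritable is passed on to the primitive inspector and fails the cast.

          import org.apache.hadoop.hive.serde2.io.DoubleWritable;
          import org.apache.hadoop.io.NullWritable;
          import org.apache.hadoop.io.Writable;

          public class NullAggregateSketch {
            // Hypothetical helper: map a null aggregate result to a plain Java null so
            // the serializer writes NULL instead of attempting the DoubleWritable /
            // LongWritable / Object[] casts seen in the traces.
            static Object toSerDeField(Writable aggregateResult) {
              if (aggregateResult == null || aggregateResult instanceof NullWritable) {
                return null;            // LazyBinarySerDe writes SQL NULL for a null field
              }
              return aggregateResult;   // e.g. a DoubleWritable for SUM over a DOUBLE column
            }

            public static void main(String[] args) {
              System.out.println(toSerDeField(NullWritable.get()));       // prints "null"
              System.out.println(toSerDeField(new DoubleWritable(3.5)));  // a non-null DoubleWritable
            }
          }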
      

      Attachments

        1. HIVE-4745.2.patch
          35 kB
          Jitendra Nath Pandey
        2. HIVE-4745.3.patch
          35 kB
          Remus Rusanu


          People

            Assignee: Jitendra Nath Pandey (jnp)
            Reporter: Tony Murphy (anthony.murphy)
            Votes: 0
            Watchers: 4
