Hive / HIVE-5845

CTAS failed on vectorized code path

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.13.0
    • Component/s: None
    • Labels: None

      Description

      The following query fails:
      create table store_sales_2 stored as orc as select * from alltypesorc;

      Attachments

      • HIVE-5845.1.patch (52 kB, Remus Rusanu)


          Activity

          Ashutosh Chauhan added a comment -

          Stack-trace:

          Caused by: java.lang.ClassCastException: org.apache.hadoop.hive.ql.io.orc.OrcStruct cannot be cast to [Ljava.lang.Object;
                  at org.apache.hadoop.hive.serde2.objectinspector.StandardStructObjectInspector.getStructFieldData(StandardStructObjectInspector.java:173)
                  at org.apache.hadoop.hive.ql.io.orc.WriterImpl$StructTreeWriter.write(WriterImpl.java:1349)
                  at org.apache.hadoop.hive.ql.io.orc.WriterImpl.addRow(WriterImpl.java:1962)
                  at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat$OrcRecordWriter.write(OrcOutputFormat.java:78)
                  at org.apache.hadoop.hive.ql.exec.vector.VectorFileSinkOperator.processOp(VectorFileSinkOperator.java:159)
                  at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:489)
                  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:827)
                  at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.processOp(VectorSelectOperator.java:129)
                  at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:489)
                  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:827)
                  at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:91)
                  at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:489)
                  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:827)
                  at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:43)
          
          Remus Rusanu added a comment -

          The VectorFileSinkOperator uses an optimized path for the VectorizedSerde which creates OrcStruct values, but attaches the standard input object inspector to them. When the time comes to consume this value/inspector combo, the code bombs because that inspector is not actually appropriate for cracking an OrcStruct.
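
          A minimal sketch of that mismatch (my illustration, not code from the patch; it assumes only Hive's serde2 classes and Hadoop common on the classpath): StandardStructObjectInspector expects each row to be an Object[] (or a List), so handing it any other representation, such as an OrcStruct, fails in getStructFieldData with exactly the cast from the stack trace above.

          import java.util.Arrays;
          import org.apache.hadoop.io.IntWritable;
          import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
          import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
          import org.apache.hadoop.hive.serde2.objectinspector.StandardStructObjectInspector;
          import org.apache.hadoop.hive.serde2.objectinspector.StructField;
          import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

          public class StructInspectorMismatch {
            public static void main(String[] args) {
              // A one-column struct inspector over the "standard" row layout.
              StandardStructObjectInspector standardOI =
                  ObjectInspectorFactory.getStandardStructObjectInspector(
                      Arrays.asList("c1"),
                      Arrays.<ObjectInspector>asList(
                          PrimitiveObjectInspectorFactory.writableIntObjectInspector));
              StructField field = standardOI.getAllStructFieldRefs().get(0);

              // Works: the "standard" physical layout is an Object[] per row.
              Object[] standardRow = { new IntWritable(42) };
              System.out.println(standardOI.getStructFieldData(standardRow, field));

              // Fails with ClassCastException: any non-Object[]/non-List row
              // (an OrcStruct, say) cannot be cracked by this inspector.
              // (Anonymous stand-in used here because OrcStruct's constructor
              // is not public.)
              Object orcLikeRow = new Object() {};
              System.out.println(standardOI.getStructFieldData(orcLikeRow, field));
            }
          }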

          Remus Rusanu added a comment -

          Hello Ashutosh,

          I’ve looked at this, and my opinion is that the problem is in ORC’s VectorizedSerde.serialize. Despite the fact that we’re writing an OrcStruct value, it attaches the passed-in object inspector, which is for the input struct, to the OrcSerde object it creates, instead of the OrcStructInspector that should be used with the created OrcStruct.

          I tried this patch:

          diff --git ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcSerde.java ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcSerde.java
          index d765353..c4268c1 100644
          --- ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcSerde.java
          +++ ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcSerde.java
          @@ -143,9 +143,9 @@ public SerDeStats getSerDeStats() {
             public Writable serializeVector(VectorizedRowBatch vrg, ObjectInspector objInspector)
                 throws SerDeException {
               if (vos == null) {
          -      vos = new VectorizedOrcSerde(objInspector);
          +      vos = new VectorizedOrcSerde(getObjectInspector());
               }
          -    return vos.serialize(vrg, objInspector);
          +    return vos.serialize(vrg, getObjectInspector());
             }

          However, with this fix I’m hitting other (very familiar…) cast exceptions:

          Caused by: java.lang.ClassCastException: org.apache.hadoop.hive.serde2.io.TimestampWritable cannot be cast to java.sql.Timestamp
          at org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaTimestampObjectInspector.getPrimitiveJavaObject(JavaTimestampObjectInspector.java:39)
          at org.apache.hadoop.hive.ql.io.orc.WriterImpl$TimestampTreeWriter.write(WriterImpl.java:1172)
          at org.apache.hadoop.hive.ql.io.orc.WriterImpl$StructTreeWriter.write(WriterImpl.java:1349)
          at org.apache.hadoop.hive.ql.io.orc.WriterImpl.addRow(WriterImpl.java:1962)
          at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat$OrcRecordWriter.write(OrcOutputFormat.java:78)
          at org.apache.hadoop.hive.ql.exec.vector.VectorFileSinkOperator.processOp(VectorFileSinkOperator.java:159)
          at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:489)
          at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:827)
          at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.processOp(VectorSelectOperator.java:129)
          at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:489)
          at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:827)
          at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:91)
          at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:489)
          at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:827)
          at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:43)

          Caused by: java.lang.ClassCastException: org.apache.hadoop.hive.serde2.io.ByteWritable cannot be cast to org.apache.hadoop.io.IntWritable
          at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector.get(WritableIntObjectInspector.java:36)
          at org.apache.hadoop.hive.ql.io.orc.WriterImpl$IntegerTreeWriter.write(WriterImpl.java:762)
          at org.apache.hadoop.hive.ql.io.orc.WriterImpl$StructTreeWriter.write(WriterImpl.java:1349)
          at org.apache.hadoop.hive.ql.io.orc.WriterImpl.addRow(WriterImpl.java:1962)
          at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat$OrcRecordWriter.write(OrcOutputFormat.java:78)
          at org.apache.hadoop.hive.ql.exec.vector.VectorFileSinkOperator.processOp(VectorFileSinkOperator.java:159)
          at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:489)
          at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:827)
          at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.processOp(VectorSelectOperator.java:129)
          at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:489)
          at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:827)
          at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:91)
          at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:489)
          at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:827)
          at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:43)

          Before I go and hack through code I’m only vaguely familiar with (the ORC serdes), do you have someone at HW more experienced in this area who could have a look too?
          It seems that the ORC writer expects Java primitive types where the vector file sink creates Writables instead… I’m afraid that if I ‘fix’ this one way, some other place will break.
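
          A minimal repro of that second mismatch (again my illustration, not code from the patch; the class names are real Hive serde2 classes, the wiring is mine): JavaTimestampObjectInspector hands back its value by casting to java.sql.Timestamp, so feeding it the TimestampWritable the vectorized sink produces reproduces the cast error above.

          import java.sql.Timestamp;
          import org.apache.hadoop.hive.serde2.io.TimestampWritable;
          import org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaTimestampObjectInspector;
          import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

          public class TimestampInspectorMismatch {
            public static void main(String[] args) {
              JavaTimestampObjectInspector javaOI =
                  PrimitiveObjectInspectorFactory.javaTimestampObjectInspector;

              // Works: the Java-object inspector expects a java.sql.Timestamp.
              System.out.println(javaOI.getPrimitiveJavaObject(new Timestamp(0L)));

              // Fails: the Writable wrapper cannot be cast to java.sql.Timestamp,
              // mirroring the TimestampTreeWriter failure in the trace above.
              Object writableValue = new TimestampWritable(new Timestamp(0L));
              System.out.println(javaOI.getPrimitiveJavaObject(writableValue));
            }
          }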

          Thanks,
          ~Remus

          Ashutosh Chauhan added a comment -

          Owen O'Malley is the obvious expert on ORC. Let's tap him for some advice here.

          Remus Rusanu added a comment -

          The root cause is not in OrcStruct/OrcSerde; it is in the VectorExpressionWriterFactory, which does not properly honor the object inspectors passed in: it always assumes a WritableXXXObjectInspector and creates a WritableXXX object value. I am fixing this.

          The reason this was exposed is that OrcStruct.VectorExpressionWriterFactory creates writable object inspectors for most primitives, but for TIMESTAMP (and also for DATE) it uses a native Java object inspector. I don't know why ORC does that, but nonetheless the VectorExpressionWriterFactory should handle this in a robust manner.
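
          A robust writer can detect which representation an inspector wants instead of assuming Writables. A tiny probe of the two inspector flavors (my illustration; the factory fields and preferWritable() are real serde2 API):

          import org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;
          import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

          public class InspectorPreferenceProbe {
            public static void main(String[] args) {
              PrimitiveObjectInspector intOI =
                  PrimitiveObjectInspectorFactory.writableIntObjectInspector;
              PrimitiveObjectInspector tsOI =
                  PrimitiveObjectInspectorFactory.javaTimestampObjectInspector;
              // true: a Writable-backed inspector, values must be IntWritable.
              System.out.println(intOI.preferWritable());
              // false: a Java-object inspector, values must be java.sql.Timestamp.
              System.out.println(tsOI.preferWritable());
            }
          }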

          Remus Rusanu added a comment -

          OrcStruct.createObjectInspector(TypeInfo info), that is (Eclipse copy/paste mix-up ...)

          Remus Rusanu added a comment -

          https://reviews.apache.org/r/15716/
          Remus Rusanu added a comment -

          The uploaded fix addresses all the issues I found:

          • changes the OrcStruct serialization to use the correct object inspector (the one that writes into the OrcStruct fields)
          • refactors the VectorExpressionWriterFactory to handle all assignments/writes and new object creation via the object inspectors, rather than assuming Writable types (see the sketch after this list)
          • adds a new API on the VectorExpressionWriter, setValue, which writes the value into the passed-in object rather than returning a mutated per-writer singleton. This was necessary for the OrcStruct vector serialization, which would otherwise end up reusing the same instance for all rows in the vector
          • changes the VectorExpressionWriter to use the TypeInfo Category and PrimitiveCategory rather than type-name string comparison
          • has the VectorExpressionWriter generate writers from the ObjectInspector and derive the OI from the ExprNodeDesc, rather than the other way around
          • extends the TestVectorExpressionWriter unit tests to cover the setValue API and struct field assignment
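
          A minimal sketch of that inspector-driven approach (my illustration, not the patch's actual API; SettableLongObjectInspector and PrimitiveCategory are real serde2 types, the helper itself is hypothetical):

          import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
          import org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;
          import org.apache.hadoop.hive.serde2.objectinspector.primitive.SettableLongObjectInspector;

          public final class InspectorDrivenWriter {
            // Writes a long value into a reusable field object supplied by the
            // caller. Dispatching on PrimitiveCategory (not type-name strings)
            // and creating/setting values through the settable inspector keeps
            // this correct whether the inspector is Writable-backed or
            // Java-object-backed.
            public static Object setLongField(ObjectInspector oi, Object field, long value) {
              PrimitiveObjectInspector poi = (PrimitiveObjectInspector) oi;
              switch (poi.getPrimitiveCategory()) {
                case LONG:
                  SettableLongObjectInspector loi = (SettableLongObjectInspector) poi;
                  return (field == null) ? loi.create(value) : loi.set(field, value);
                default:
                  throw new IllegalArgumentException(
                      "Unsupported primitive category: " + poi.getPrimitiveCategory());
              }
            }
          }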
          Ashutosh Chauhan added a comment -

          +1

          Hive QA added a comment -

          Overall: +1 all checks pass

          Here are the results of testing the latest attachment:
          https://issues.apache.org/jira/secure/attachment/12614892/HIVE-5845.1.patch

          SUCCESS: +1 4679 tests passed

          Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/378/testReport
          Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/378/console

          Messages:

          Executing org.apache.hive.ptest.execution.PrepPhase
          Executing org.apache.hive.ptest.execution.ExecutionPhase
          Executing org.apache.hive.ptest.execution.ReportingPhase
          

          This message is automatically generated.

          ATTACHMENT ID: 12614892

          Ashutosh Chauhan added a comment -

          Committed to trunk. Thanks, Remus!


            People

            • Assignee:
              Remus Rusanu
            • Reporter:
              Ashutosh Chauhan
            • Votes:
              0
            • Watchers:
              2
