
HIVE-19085: FastHiveDecimal abs(0) sets sign to +ve



    Description

      Hi,

      We use Parquet tables to store the results of other queries. Some of those queries use the abs function. If abs takes 0 (of type decimal) as input, the insert into the Parquet table fails.

       

      Scenario:

      create table test (col1 decimal(10,2)) stored as parquet;

      insert into test values(0);

      insert into test select abs(col1) from test;

       

      Result:

      The insert query crashes with the following error:

       

      2018-03-30 17:39:02,123 FATAL [IPC Server handler 2 on 35885] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1522311557218_0002_m_000000_0 - exited : java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"col1":0}
      at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:169)
      at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
      at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
      at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
      at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:422)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
      at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:169)
      Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"col1":0}
      at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:562)
      at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:160)
      ... 8 more
      Caused by: java.lang.RuntimeException: Unexpected #3
      at org.apache.hadoop.hive.common.type.FastHiveDecimalImpl.fastBigIntegerBytesUnscaled(FastHiveDecimalImpl.java:2550)
      at org.apache.hadoop.hive.common.type.FastHiveDecimalImpl.fastBigIntegerBytesScaled(FastHiveDecimalImpl.java:2806)
      at org.apache.hadoop.hive.common.type.FastHiveDecimal.fastBigIntegerBytesScaled(FastHiveDecimal.java:295)
      at org.apache.hadoop.hive.common.type.HiveDecimal.bigIntegerBytesScaled(HiveDecimal.java:712)
      at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$DecimalDataWriter.decimalToBinary(DataWritableWriter.java:521)
      at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$DecimalDataWriter.write(DataWritableWriter.java:514)
      at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$GroupDataWriter.write(DataWritableWriter.java:204)
      at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$MessageDataWriter.write(DataWritableWriter.java:220)
      at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:91)
      at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
      at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
      at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:121)
      at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123)
      at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42)
      at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:112)
      at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:125)
      at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:762)
      at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
      at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
      at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
      at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130)
      at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:148)
      at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:547)

       

      The problem is probably due to the fastAbs method: it forces fastSignum to 1 even when the decimal is 0, whereas for zero fastSignum must stay 0. That leaves the value in an inconsistent state (a positive sign with an all-zero magnitude), which the Parquet write path in fastBigIntegerBytesUnscaled does not expect, hence the "Unexpected #3" error above.
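
      For concreteness, here is a minimal, self-contained Java sketch of the suspected bug and of what a guard would look like. It is illustrative only, not the actual FastHiveDecimal source; only the fastSignum field name comes from the analysis above, and the class and method names are hypothetical. The sign convention mirrors java.math.BigDecimal.signum(): -1 for negative, 0 for zero, 1 for positive, so BigDecimal.ZERO.abs().signum() is 0.

      public class FastAbsSketch {
          // Stand-in for the decimal's sign field: -1 negative, 0 zero, 1 positive.
          private int fastSignum;

          FastAbsSketch(int signum) {
              this.fastSignum = signum;
          }

          // Buggy shape: unconditionally marks the value positive, so abs(0)
          // produces the inconsistent state fastSignum == 1 with a zero magnitude.
          void fastAbsBuggy() {
              fastSignum = 1;
          }

          // Guarded shape: a true zero keeps fastSignum == 0.
          void fastAbsGuarded() {
              if (fastSignum != 0) {
                  fastSignum = 1;
              }
          }

          public static void main(String[] args) {
              FastAbsSketch zero = new FastAbsSketch(0);
              zero.fastAbsBuggy();
              System.out.println("buggy:   " + zero.fastSignum);  // prints 1, the bad state

              zero = new FastAbsSketch(0);
              zero.fastAbsGuarded();
              System.out.println("guarded: " + zero.fastSignum);  // prints 0, consistent zero
          }
      }

      A one-line guard of this shape would also be consistent with the small size (0.5 kB) of the attached patch.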

       

      Have a good day

      Attachments

        1. HIVE-19085.1.patch (0.5 kB, Gopal Vijayaraghavan)


People

  Assignee: Gopal Vijayaraghavan (gopalv)
  Reporter: SDAT/SACT/IET ACOSS
  Votes: 0
  Watchers: 4
