Spark / SPARK-26437

Decimal data is read back as a BigInteger at query time, making the table unqueryable


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.6.3, 2.0.2, 2.1.3, 2.2.2, 2.3.1
    • Fix Version/s: 3.0.0
    • Component/s: SQL
    • Labels: None

    Description

      This is my SQL:

      create table tmp.tmp_test_6387_1224_spark stored as ORCFile as select 0.00 as a

      select a from tmp.tmp_test_6387_1224_spark

      The table's DDL shows the column typed as decimal(2,2):

      CREATE TABLE `tmp.tmp_test_6387_1224_spark`(

        `a` decimal(2,2))

      ROW FORMAT SERDE

        'org.apache.hadoop.hive.ql.io.orc.OrcSerde'

      STORED AS INPUTFORMAT

        'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'

      OUTPUTFORMAT

        'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'

      When I query this table (using either Hive or Spark SQL; the exception is the same), the following exception is thrown:

      Caused by: java.io.EOFException: Reading BigInteger past EOF from compressed stream Stream for column 1 kind DATA position: 0 length: 0 range: 0 offset: 0 limit: 0

              at org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readBigInteger(SerializationUtils.java:176)

              at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$DecimalTreeReader.next(TreeReaderFactory.java:1264)

              at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.next(TreeReaderFactory.java:2004)

              at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:1039)
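
      For context, the stack trace shows DecimalTreeReader calling readBigInteger: ORC stores each decimal value as an unscaled big integer plus a scale, and the EOFException reports the column's DATA stream as empty (length: 0), so there is no big integer to read. A minimal Python sketch of the unscaled-integer-plus-scale representation (illustrative only; this is not Spark's or Hive's code):

      ```python
      from decimal import Decimal

      # A decimal value decomposes into an unscaled integer plus a scale;
      # the unscaled integer is the "BigInteger" the ORC DecimalTreeReader
      # expects to find in column `a`'s DATA stream.
      value = Decimal("0.00")
      sign, digits, exponent = value.as_tuple()
      unscaled = int("".join(map(str, digits)))  # unscaled integer: 0
      scale = -exponent                          # scale: 2, as in decimal(2,2)
      print(unscaled, scale)                     # prints: 0 2
      ```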


      Attachments

      Issue Links

      Activity

      People

      Assignee: Unassigned
      Reporter: zengxl
      Votes: 0
      Watchers: 5

      Dates

      Created:
      Updated:
      Resolved: