Hive / HIVE-7787

Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError

    Details

    • Type: Bug
    • Status: Reopened
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 0.12.0, 0.12.1, 0.13.0, 0.13.1, 0.14.0
    • Fix Version/s: None
    • Component/s: Database/Schema, Thrift API
    • Labels:
      None
    • Environment:

      Hive 0.12 CDH 5.1.0, Hadoop 2.3.0 CDH 5.1.0

    • Tags:
      Parquet

      Description

      When reading a Parquet file whose original Thrift schema contains a struct with an enum, Hive throws the following error (full stack trace below):

       java.lang.NoSuchFieldError: DECIMAL
      

      Example Thrift Schema:

      enum MyEnumType {
          EnumOne,
          EnumTwo,
          EnumThree
      }
      
      struct MyStruct {
          1: optional MyEnumType myEnumType;
          2: optional string field2;
          3: optional string field3;
      }
      
      struct outerStruct {
          1: optional list<MyStruct> myStructs
      }
      

      Hive Table:

      CREATE EXTERNAL TABLE mytable (
        mystructs array<struct<myenumtype: string, field2: string, field3: string>>
      )
      ROW FORMAT SERDE 'parquet.hive.serde.ParquetHiveSerDe'
      STORED AS
      INPUTFORMAT 'parquet.hive.DeprecatedParquetInputFormat'
      OUTPUTFORMAT 'parquet.hive.DeprecatedParquetOutputFormat'
      ; 
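
      For context, the failure occurs at read time, during record-reader initialization, so any query that scans the struct column should surface it. A hypothetical reproduction query (table name taken from the DDL above) might look like:

      ```sql
      -- Hypothetical reproduction: reading the struct column (which contains the
      -- enum-backed field) from the Parquet-backed table triggers the
      -- NoSuchFieldError while the Parquet record reader is being initialized.
      SELECT mystructs FROM mytable LIMIT 1;
      ```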
      

      Error Stack trace:

      Java stack trace for Hive 0.12:
      Caused by: java.lang.NoSuchFieldError: DECIMAL
      	at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter.getNewConverter(ETypeConverter.java:146)
      	at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:31)
      	at org.apache.hadoop.hive.ql.io.parquet.convert.ArrayWritableGroupConverter.<init>(ArrayWritableGroupConverter.java:45)
      	at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:34)
      	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
      	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:47)
      	at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:36)
      	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
      	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:40)
      	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableRecordConverter.<init>(DataWritableRecordConverter.java:32)
      	at org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.prepareForRead(DataWritableReadSupport.java:128)
      	at parquet.hadoop.InternalParquetRecordReader.initialize(InternalParquetRecordReader.java:142)
      	at parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:118)
      	at parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:107)
      	at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:92)
      	at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:66)
      	at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:51)
      	at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
      	... 16 more
      

        Issue Links

          This issue is related to HIVE-6367
          Activity

          Raymond Lau created issue -
          Raymond Lau made changes -
          Field Original Value New Value
          Description
          Raymond Lau made changes -
          Summary Reading Parquet file with enum in Thrift Encoding → Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
          Raymond Lau made changes -
          Environment Hive 0.12 CDH 5.1.0 → Hive 0.12 CDH 5.1.0, Hadoop 0.23
          Raymond Lau made changes -
          Affects Version/s 0.13.0 [ 12324986 ]
          Affects Version/s 0.14.0 [ 12326450 ]
          Raymond Lau made changes -
          Affects Version/s 0.12.1 [ 12325279 ]
          Raymond Lau made changes -
          Environment Hive 0.12 CDH 5.1.0, Hadoop 0.23 → Hive 0.12 CDH 5.1.0, Hadoop 2.3.0 CDH 5.1.0
          Raymond Lau made changes -
          Affects Version/s 0.13.1 [ 12326829 ]
          Affects Version/s 0.13.0 [ 12324986 ]
          Affects Version/s 0.14.0 [ 12326450 ]
          Svend Vanderveken made changes -
          Link This issue is related to HIVE-6367 [ HIVE-6367 ]
          Arup Malakar made changes -
          Attachment HIVE-7787.trunk.1.patch [ 12672749 ]
          Arup Malakar made changes -
          Status Open [ 1 ] → Patch Available [ 10002 ]
          Assignee Arup Malakar [ amalakar ]
          Fix Version/s 0.14.0 [ 12326450 ]
          Gunther Hagleitner made changes -
          Fix Version/s 0.14.0 [ 12326450 ]
          Ryan Blue made changes -
          Status Patch Available [ 10002 ] → Resolved [ 5 ]
          Resolution Not a Problem [ 8 ]
          Arup Malakar made changes -
          Resolution Not a Problem [ 8 ]
          Status Resolved [ 5 ] → Reopened [ 4 ]

            People

            • Assignee: Arup Malakar
            • Reporter: Raymond Lau
            • Votes: 2
            • Watchers: 6
