Details
- Type: Bug
- Status: Patch Available
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 2.3.5, 2.3.6
- Fix Version/s: None
Description
When a user creates an external table and imports Parquet data written with parquet-avro 1.8.2 (which supports logical types) into Hive 2.3 or an earlier version, Hive cannot read timestamp columns correctly.
Hive reads each value as a LongWritable, because it is physically stored as a long (logical_type=timestamp-millis). We could add code to org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector to convert the long value to a timestamp.
For example:
public Timestamp getPrimitiveJavaObject(Object o) {
  if (o == null) { return null; }
  // Epoch milliseconds stored as a long (logical_type=timestamp-millis)
  if (o instanceof LongWritable) {
    return new Timestamp(((LongWritable) o).get());
  }
  return ((TimestampWritable) o).getTimestamp();
}