Details
Type: Bug
Status: Closed
Priority: Critical
Resolution: Fixed
Description
Issue that started this JIRA:
create external table varchar_decimal (c1 varchar(25));
alter table varchar_decimal change c1 c1 decimal(31,0);
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. The following columns have types incompatible with the existing columns in their respective positions :
c1
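(Context added for illustration, not part of the original report: the failing check is governed by hive.metastore.disallow.incompatible.col.type.changes, discussed below. Presumably the statement goes through if that protection is relaxed, though depending on the deployment the property may need to be set on the metastore side rather than per session:)
set hive.metastore.disallow.incompatible.col.type.changes=false;
alter table varchar_decimal change c1 c1 decimal(31,0);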
There appear to be 2 issues here:
1) When hive.metastore.disallow.incompatible.col.type.changes is true (the default), we only allow conversion of a StringFamily type (STRING, CHAR, VARCHAR) to a numeric type that can hold the largest values. The rationale is to avoid the data loss you would get by converting a StringFamily field to an integer, etc. In Hive version 2 the numeric hierarchy had DECIMAL at the top. At some point during Hive version 2 we realized this was incorrect and put DOUBLE at the top.
However, the Hive version 2 TypeInfoUtils.implicitConvertible method allows StringFamily conversion to either DOUBLE or DECIMAL.
The new org.apache.hadoop.hive.metastore.ColumnType class in the Hive version 3 standalone metastore (hive-standalone-metastore) has a checkColTypeChangeCompatible method that only allows DOUBLE.
This JIRA fixes that problem.
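For illustration only (the table names here are invented, and this assumes the default hive.metastore.disallow.incompatible.col.type.changes=true), the policy before and after this fix looks roughly like this:
create external table string_widening_a (c1 varchar(25));
create external table string_widening_b (c1 varchar(25));
-- Rejected before and after this fix: INT cannot hold every StringFamily value, so the column is left unchanged.
alter table string_widening_a change c1 c1 int;
-- Accepted before and after this fix: DOUBLE sits at the top of the numeric hierarchy.
alter table string_widening_a change c1 c1 double;
-- Rejected before this fix, accepted with it: Hive version 2's implicitConvertible also allowed StringFamily to DECIMAL.
alter table string_widening_b change c1 c1 decimal(31,0);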
2) Also, the checkColTypeChangeCompatible method lost a version 2 series bug fix that drops CHAR/VARCHAR (and, I think, DECIMAL) type decorations when checking Schema Evolution compatibility. So, when that code checks whether the data type "varchar(25)" is StringFamily, it fails because the "(25)" did not get removed properly.
This JIRA fixes issue #2 also.
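To make issue #2 concrete (again an illustrative sketch, with the table and column names invented here): every decorated type must have its "(...)" decoration stripped before the family check, otherwise the base type name is never recognized as StringFamily (or DECIMAL):
create external table decorated_types (a char(10), b varchar(25), c decimal(10,2));
-- Without the decoration fix this fails: "varchar(25)" is not recognized as StringFamily because the "(25)" is never stripped.
alter table decorated_types change b b decimal(31,0);
-- Same story for CHAR: "char(10)" must be reduced to "char" before the StringFamily check.
alter table decorated_types change a a double;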
NOTE: the Hive version 1 code did undecoratedTypeName(oldType), and the Hive version 2 code performed the logic in TypeInfoUtils.implicitConvertible on the PrimitiveCategory, not the raw type string.