Details
Type: Bug
Status: Resolved
Priority: Major
Resolution: Fixed
Version: empire-db-2.4.2
Description
The code that determines a decimal/numeric column's scale from its size assumes that 9 is the maximum value for the scale, when in fact the scale can be as large as the column's precision (up to 38, I believe).
Precision and scale for a decimal field are encoded in a double as "precision.scale", so the size of a numeric(10,3) field is encoded as the double value 10.3.
numeric(17,12) is also a valid numeric field, but because the code assumes that the scale is less than 10, this causes problems even though it is stored correctly as 17.12 in the column's size field.
Even worse is numeric(17,10), which would be interpreted as having a scale of 1 rather than 10, because the double value 17.10 is indistinguishable from 17.1.
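To illustrate: the collision happens at encoding time, before any decoding code even runs. This standalone snippet (just a demonstration, not empire-db code) shows that a double cannot hold the distinction:

public class ScaleEncodingDemo {
    public static void main(String[] args) {
        double size = 17.10;  // intended to encode numeric(17,10)
        // 17.10 and 17.1 are the same double, so the trailing zero
        // of the scale is lost before anything can read it back
        System.out.println(size);          // prints 17.1
        System.out.println(size == 17.1);  // prints true
    }
}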
This manifests itself in two locations that I've found so far: when validating a decimal field and when generating a DDL statement.
The offending code in DBTableColumn.validateNumber(DataType type, Number n) is:
int reqScale = ((int)(size*10) - (reqPrec*10));
and in DBDDLGenerator.appendColumnDataType(DataType type, double size, DBTableColumn c, StringBuilder sql) is:
int scale = (int) ((size - prec) * 10 + 0.5);
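For illustration, here is what both expressions produce for numeric(17,12) (a standalone sketch using the same arithmetic, not the actual empire-db code path):

public class ScaleDecodeDemo {
    public static void main(String[] args) {
        double size = 17.12;  // encodes numeric(17,12)
        int reqPrec = 17;     // the precision, i.e. (int) size

        // validateNumber arithmetic: (int)(171.2...) - 170 = 1
        int reqScale = ((int)(size * 10) - (reqPrec * 10));
        System.out.println(reqScale);  // prints 1, expected 12

        // appendColumnDataType arithmetic: 0.12 * 10 + 0.5 truncates to 1
        int scale = (int) ((size - reqPrec) * 10 + 0.5);
        System.out.println(scale);     // prints 1, expected 12
    }
}

Both expressions assume the scale fits in a single decimal digit.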
My first suggestion for a fix would be to alter the two pieces of code above to perform the correct conversion from double to precision/scale, using something like BigDecimal. The problem is that by then it is too late: numeric(17,12) would be fine, but it would not fix the problem for numeric(17,10), as that would already be stored as 17.1 in the column size.
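For reference, a BigDecimal-based decode along those lines might look like the sketch below (my own illustration, not a proposed patch). It recovers numeric(17,12) correctly, but as noted, numeric(17,10) is unrecoverable once the size has been stored as the double 17.1:

import java.math.BigDecimal;

public class BigDecimalDecode {
    // Decode a "precision.scale" size via the double's string form
    // rather than floating-point arithmetic.
    static int[] decodeSize(double size) {
        BigDecimal enc = new BigDecimal(Double.toString(size));
        int precision = enc.intValue();
        // shift the fractional digits left of the decimal point
        int scale = enc.remainder(BigDecimal.ONE)
                       .movePointRight(enc.scale())
                       .intValueExact();
        return new int[] { precision, scale };
    }

    public static void main(String[] args) {
        int[] ok = decodeSize(17.12);
        System.out.println(ok[0] + "," + ok[1]);      // prints 17,12
        int[] lost = decodeSize(17.10);
        System.out.println(lost[0] + "," + lost[1]);  // prints 17,1 - already lost
    }
}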
I don't see any option but to either move the scale into a separate value and stop encoding it in the size, or change the column size field to a BigDecimal (or some other type that can represent both the precision and scale correctly).
The former feels more correct to me, but the latter seems easier to implement; unfortunately, both options are API-breaking changes.
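As a rough sketch of the first option, the column could carry the two values separately (all names here are hypothetical, not the actual empire-db API):

// Hypothetical: store precision and scale as separate ints instead of
// encoding both in one double-valued size field.
public class DecimalColumnSize {
    private final int precision;  // total digits, e.g. 17
    private final int scale;      // digits after the decimal point, e.g. 10

    public DecimalColumnSize(int precision, int scale) {
        if (scale < 0 || scale > precision)
            throw new IllegalArgumentException("scale must be between 0 and precision");
        this.precision = precision;
        this.scale = scale;
    }

    public int getPrecision() { return precision; }
    public int getScale() { return scale; }
}

With this, numeric(17,10) and numeric(17,1) are distinct values and no lossy decoding is needed.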
Thanks,
Shaun