Details
- Bug
- Status: Resolved
- Major
- Resolution: Fixed
Description
The decimal schema currently performs an incorrect resolving check for decimal values decoded from the bytes of a BigInteger (signed big-endian two's-complement form).
The resolving check validates that a value's "max" precision does not exceed the precision declared in the schema, and it computes that "max" precision from the encoded byte length using the formula stated in the Avro specification (see the section on decimals). It should not do this in this case: the formula yields the largest precision representable in that many bytes, which overstates the value's real precision whenever the encoding is wider than necessary (for example, a small value in a fixed-size field). The actual precision of an input decimal value is simply the number of digits in that value.
To see this in action, run the test supplied in the attached PR.
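The mismatch can be sketched as follows. This is a minimal illustration, not the actual resolver code: the class and method names are hypothetical, and `maxPrecisionForBytes` implements the byte-length formula from the Avro spec while `actualPrecision` counts the value's digits.

```java
import java.math.BigInteger;

public class DecimalPrecisionSketch {
    // Upper bound on precision for `size` bytes, per the Avro spec formula
    // for fixed decimals: floor(log10(2^(8 * size - 1) - 1)).
    static int maxPrecisionForBytes(int size) {
        BigInteger max = BigInteger.ONE.shiftLeft(8 * size - 1).subtract(BigInteger.ONE);
        return max.toString().length() - 1; // digit count minus one == floor(log10)
    }

    // Actual precision of a value: the number of decimal digits in it.
    static int actualPrecision(BigInteger unscaled) {
        return unscaled.abs().toString().length();
    }

    public static void main(String[] args) {
        // The value 12 stored in a 4-byte signed big-endian encoding,
        // as it would be in a 4-byte fixed decimal field.
        byte[] padded = {0, 0, 0, 12};
        BigInteger value = new BigInteger(padded);

        System.out.println(maxPrecisionForBytes(padded.length)); // 9
        System.out.println(actualPrecision(value));              // 2
        // A check based on the formula compares 9 against the schema's
        // precision, so a precision-2 schema rejects this value even
        // though 12 clearly fits in 2 digits.
    }
}
```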