Details
- Type: Improvement
- Status: Resolved
- Priority: Normal
- Resolution: Fixed
- Component/s: Semantic
- Labels: Low Hanging Fruit
- Affects Version/s: All
- Fix Version/s: None
Description
The decimal operators hard-code a 128-bit precision for their computations. The precision probably needs to be configurable or derived somehow, but it is not clear why 128 bits was chosen. In particular, for multiplication and addition it is unclear why we truncate, which differs from our behaviour for e.g. sum() aggregates. For division we should probably also ensure that we do not reduce the precision of either operand. A minimum of decimal128 seems reasonable, but a maximum does not.
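To make the concern concrete, here is a minimal, hypothetical sketch (in Java, following common SQL-style type-derivation conventions, not this project's actual code) of how result precision and scale could be derived from the operands instead of being capped at a fixed 128-bit width, plus a small demonstration of how a fixed ~38-digit context silently rounds a large product. The class name DecimalTypeRules, the formulas, and the floor of 6 fractional digits for division are illustrative assumptions.

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

// Hypothetical sketch only: the class name and the precision/scale formulas
// below are illustrative SQL-style conventions, not this project's code.
class DecimalTypeRules {
    final int precision; // total significant digits
    final int scale;     // digits after the decimal point

    DecimalTypeRules(int precision, int scale) {
        this.precision = precision;
        this.scale = scale;
    }

    // Addition/subtraction: keep the larger scale plus one extra integer
    // digit for a carry, so the exact sum always fits and nothing is truncated.
    static DecimalTypeRules add(DecimalTypeRules a, DecimalTypeRules b) {
        int scale = Math.max(a.scale, b.scale);
        int intDigits = Math.max(a.precision - a.scale, b.precision - b.scale) + 1;
        return new DecimalTypeRules(intDigits + scale, scale);
    }

    // Multiplication: the exact product needs at most p1 + p2 digits, so
    // widening the result type avoids truncation entirely.
    static DecimalTypeRules multiply(DecimalTypeRules a, DecimalTypeRules b) {
        return new DecimalTypeRules(a.precision + b.precision, a.scale + b.scale);
    }

    // Division: an exact result can be infinitely long, so rounding is
    // unavoidable, but the result stays at least as wide as either operand.
    // The floor of 6 fractional digits is a common convention, not a requirement.
    static DecimalTypeRules divide(DecimalTypeRules a, DecimalTypeRules b) {
        int scale = Math.max(6, a.scale + b.precision + 1);
        int precision = a.precision - a.scale + b.scale + scale;
        return new DecimalTypeRules(Math.max(precision, Math.max(a.precision, b.precision)), scale);
    }

    public static void main(String[] args) {
        DecimalTypeRules x = new DecimalTypeRules(38, 10); // decimal(38, 10)
        DecimalTypeRules y = new DecimalTypeRules(20, 5);  // decimal(20, 5)
        DecimalTypeRules p = multiply(x, y);
        System.out.printf("multiply -> decimal(%d, %d)%n", p.precision, p.scale);

        // Forcing everything through a fixed ~38-digit context (roughly what a
        // 128-bit decimal holds) silently rounds large products:
        BigDecimal big = new BigDecimal("9".repeat(30) + "." + "9".repeat(10));
        System.out.println("exact product digits:   " + big.multiply(big).precision());
        System.out.println("rounded product digits: "
                + big.multiply(big, new MathContext(38, RoundingMode.HALF_UP)).precision());
    }
}
```

Under conventions like these, addition and multiplication never truncate because the result type simply widens, which would match the sum() behaviour mentioned above; division still needs an explicit rounding policy, but one derived from the operands rather than a hard-coded 128-bit cap.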