Spark / SPARK-25454

Division between operands with negative scale can cause precision loss

    Details

    • Type: Bug
    • Status: In Progress
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.3.0, 2.3.1
    • Fix Version/s: None
    • Component/s: SQL
    • Labels: None

      Description

      The issue was originally reported by Bruce Robbins here: https://issues.apache.org/jira/browse/SPARK-22036?focusedCommentId=16618104&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16618104.

      The problem is a precision loss that occurs when the second operand of a division is a decimal with a negative scale. It was present before 2.3 as well, but it was harder to reproduce: you had to write something like lit(BigDecimal(100e6)), whereas now it can happen more frequently with SQL constants.
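To make the negative-scale case concrete, here is a minimal sketch using java.math.BigDecimal (the same representation Spark's Decimal wraps); the literal is illustrative, not taken from the original report:

```java
import java.math.BigDecimal;

public class NegativeScaleLiteral {
    public static void main(String[] args) {
        // A SQL constant such as 1e8 parses to a decimal whose unscaled
        // value is 1 and whose scale is -8: precision 1, scale -8.
        BigDecimal c = new BigDecimal("1E+8");
        System.out.println(c.precision()); // 1
        System.out.println(c.scale());     // -8
    }
}
```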

      The problem is that our logic is taken from Hive and SQL Server, where decimals with negative scales are not allowed. We might eventually consider enforcing that restriction in 3.0 as well; meanwhile, we can fix the logic for computing the result type of a division.
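As a sketch of why a negative-scale divisor shrinks the result type, the helper below applies the Hive-derived division rule as I understand Spark's precision-inference logic (precision = p1 - s1 + s2 + scale, scale = max(6, s1 + p2 + 1)); it is illustrative, not Spark's actual code:

```java
public class NegativeScaleDemo {
    // Hive-derived result type for decimal(p1, s1) / decimal(p2, s2)
    // (assumed formula; the real logic lives in Spark's DecimalPrecision rule):
    //   scale     = max(6, s1 + p2 + 1)
    //   precision = p1 - s1 + s2 + scale
    static int[] divisionResultType(int p1, int s1, int p2, int s2) {
        int scale = Math.max(6, s1 + p2 + 1);
        int precision = p1 - s1 + s2 + scale;
        return new int[] { precision, scale };
    }

    public static void main(String[] args) {
        // decimal(10, 0) divided by an ordinary decimal(9, 0) divisor:
        int[] ok = divisionResultType(10, 0, 9, 0);
        System.out.println(ok[0] + "," + ok[1]);       // 20,10

        // Same dividend divided by 1e8, i.e. decimal(1, -8): the negative
        // s2 is subtracted from the precision, so the result type ends up
        // decimal(8, 6) and fractional digits of the exact quotient are lost.
        int[] lossy = divisionResultType(10, 0, 1, -8);
        System.out.println(lossy[0] + "," + lossy[1]); // 8,6
    }
}
```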


            People

            • Assignee: Unassigned
            • Reporter: mgaido Marco Gaido
            • Votes: 0
            • Watchers: 7
