IMPALA-7967: Incorrect decimal size in V2 for a numeric const cast to BIGINT


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: Impala 3.1.0
    • Fix Version/s: None
    • Component/s: Frontend
    • Labels: None
    • ghx-label-3

    Description

      Decimal version 2 (DECIMAL_V2) introduces revised rules for computing the width (precision and scale) of decimal expressions. For example:

      CAST(1 AS DECIMAL(10, 0)) + CAST(2 AS DECIMAL(19,0)) --> DECIMAL(20,0)
      

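      Under V2, the result type in this example appears to follow the usual widening formula for addition (scale = max(s1, s2); precision = max(p1 - s1, p2 - s2) + max(s1, s2) + 1, capped at 38). A minimal Java sketch under that assumption (class and method names are illustrative, not the actual FE API):

          // Sketch of the assumed V2 widening rule for DECIMAL(p1,s1) + DECIMAL(p2,s2).
          // Names are illustrative; this is not the Impala FE implementation.
          public class DecimalAddWidthSketch {
            static final int MAX_PRECISION = 38;

            /** Returns {precision, scale} of the addition result. */
            static int[] addResultType(int p1, int s1, int p2, int s2) {
              int scale = Math.max(s1, s2);
              int intDigits = Math.max(p1 - s1, p2 - s2);
              int precision = Math.min(intDigits + scale + 1, MAX_PRECISION);
              return new int[] { precision, scale };
            }

            public static void main(String[] args) {
              int[] r = addResultType(10, 0, 19, 0);
              // Prints DECIMAL(20,0), matching the example above.
              System.out.println("DECIMAL(" + r[0] + "," + r[1] + ")");
            }
          }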
      The FE uses rules to convert from one type to another. The rule to convert from BIGINT to DECIMAL is:

      BIGINT --> DECIMAL(19,0)
      

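      This mapping follows from BIGINT being a 64-bit signed integer: its largest value, 9223372036854775807, has 19 decimal digits. A quick illustration (for explanation only, not FE code):

          // BIGINT corresponds to a 64-bit signed long; its maximum value has
          // 19 decimal digits, hence the BIGINT --> DECIMAL(19,0) mapping.
          public class BigintWidth {
            public static void main(String[] args) {
              System.out.println(Long.MAX_VALUE);                         // 9223372036854775807
              System.out.println(Long.toString(Long.MAX_VALUE).length()); // 19
            }
          }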
      Put these two together:

      CAST(1 AS DECIMAL(10, 0)) + CAST(2 AS BIGINT)
      

      The result should be DECIMAL(20,0). But because of a bug in the way constant folding works, the result is actually DECIMAL(11,0), which is what the V2 rule produces if the folded constant keeps its natural type of DECIMAL(1,0) instead of the DECIMAL(19,0) implied by the BIGINT cast. This can be seen in AnalyzeExprsTest.TestDecimalArithmetic():

          testDecimalExpr(decimal_10_0 + " + cast(1 as bigint)",
              ScalarType.createDecimalType(11, 0));
      

      It seems one reason the bug was not caught is that the unit tests only exercise constants, never columns. Modify the tests to run against functional.alltypes (they currently run without a table), and substitute bigint_col for CAST(2 AS BIGINT) (see the sketch below).
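      As a sketch of that change (assuming the existing testDecimalExpr helper can analyze column references against functional.alltypes; the actual test plumbing may differ), the test above would become:

          testDecimalExpr(decimal_10_0 + " + bigint_col",
              ScalarType.createDecimalType(20, 0));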

      The expectation is that the code follows the rules described above whether the values are columns or constants explicitly cast to the same type as the column. (Constants without a cast should follow the rules for their "natural type", which is what appears, incorrectly, to be happening in the test case above.)

      Attachments

        Activity

          People

            Assignee: Unassigned
            Reporter: Paul Rogers
            Votes: 0
            Watchers: 1

            Dates

              Created:
              Updated: