Derby / DERBY-5546

ResultSet#updateBigDecimal on a REAL column does not do underflow checking

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 10.2.1.6, 10.2.2.0, 10.3.1.4, 10.3.2.1, 10.3.3.0, 10.4.1.3, 10.4.2.0, 10.5.1.1, 10.5.2.0, 10.5.3.0, 10.6.1.0, 10.6.2.1, 10.7.1.1, 10.8.1.2, 10.8.2.2, 10.8.3.0, 10.9.1.0
    • Fix Version/s: 10.9.2.2, 10.10.1.1
    • Component/s: JDBC
    • Urgency:
      Normal
    • Issue & fix info:
      Repro attached
    • Bug behavior facts:
      Data corruption

      Description

      ResultSet#updateBigDecimal on a REAL column does not perform underflow checking. In contrast, ResultSet#updateBigDecimal on a FLOAT or DOUBLE column gives the expected error on underflow. Cf. the attached repro program MissingUnderflowCheck.

      1. releaseNote.html
        4 kB
        Dag H. Wanvik
      2. MissingUnderflowCheck.java
        2 kB
        Dag H. Wanvik
      3. derby-5546-2.stat
        0.2 kB
        Dag H. Wanvik
      4. derby-5546-2.diff
        5 kB
        Dag H. Wanvik
      5. derby-5546.stat
        0.2 kB
        Dag H. Wanvik
      6. derby-5546.diff
        6 kB
        Dag H. Wanvik


          Activity

          Dag H. Wanvik added a comment -

          In SQLReal, the method setBigDecimal has this line:

          setValue(bigDecimal.floatValue());

          The method BigDecimal.floatValue has this caveat in the Javadoc:
          " Note that even when the return value is finite, this conversion can lose information about the precision of the BigDecimal value. "

          So, if the actual value is Double.MIN_VALUE, the value here is set to 0.0 without any error.
          In contrast, when we try to update a DOUBLE column, the method used is

          setValue(bigDecimal.doubleValue())

          This will not underflow, but subsequently gets caught by the range check in NumberDataType#normalizeDouble() where the value is checked against Limits.DB2_SMALLEST_DOUBLE.

          However, if the value represented in the BigDecimal is even smaller than Double.MIN_VALUE, the call "bigDecimal.doubleValue()" would yield 0.0 and we would have the same behavior for DOUBLE, too.
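The underflow described above can be seen in plain Java, independent of Derby; a minimal sketch using only java.math.BigDecimal:

```java
import java.math.BigDecimal;

public class UnderflowDemo {
    public static void main(String[] args) {
        // new BigDecimal(double) is exact, so this holds ~4.9e-324 precisely
        BigDecimal tiny = new BigDecimal(Double.MIN_VALUE);

        // floatValue() silently underflows to 0.0f -- the value is lost
        System.out.println(tiny.floatValue() == 0.0f);  // true

        // doubleValue() preserves it, so a later range check can catch it
        System.out.println(tiny.doubleValue() == 0.0d); // false
    }
}
```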

          Dag H. Wanvik added a comment -

          Uploading derby-5546.diff, which adds underflow checking to SQLReal's and SQLDouble's setBigDecimal methods.

          It also uncomments the tests in ParameterMappingTest#testDerby5533UpdateXXX that had to be disabled due to this error. Or more precisely: the tests are uncommented for the embedded case. DERBY-5534 still makes it necessary to skip them for the client driver case.

          Running regressions.

          Knut Anders Hatlen added a comment -

          I think the suggested changes are fine. Could the underflow check be factored out to a shared helper method?

          I haven't tested, but I suspect the API comment about loss of precision in BigDecimal.floatValue() also refers to numbers like 1.00000(...)00001 being truncated to 1. Does/should updateBigDecimal() fail in that case too?

          Dag H. Wanvik added a comment -

          I agree with your interpretation of the API comment about precision loss. I took my cues from the way this is currently handled, cf. the code in SQLReal#setDouble:

          float fv = (float) theValue;

          // detect rounding taking place at cast time
          if (fv == 0.0f && theValue != 0.0d) {
              throw StandardException.newException(
                  SQLState.LANG_OUTSIDE_RANGE_FOR_DATATYPE, TypeId.REAL_NAME);
          }

          This is indeed (only) a special case of precision loss. I haven't found any definition of what should happen in the JDBC standard, so I guess it's implementation-defined. With my patch the behavior is more consistent. We could also signal other forms of precision loss, like the one you show, or we could stop throwing on the above as well. What do you think?
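Knut's suggestion of a shared helper could be sketched along the lines of the setDouble pattern above. This is an illustration only: the method name toFloatChecked and the use of ArithmeticException are hypothetical stand-ins (Derby itself throws StandardException with SQLState.LANG_OUTSIDE_RANGE_FOR_DATATYPE):

```java
import java.math.BigDecimal;

public class UnderflowCheck {
    // Hypothetical helper: detect a non-zero BigDecimal collapsing to
    // zero when narrowed to float, mirroring SQLReal#setDouble's check.
    static float toFloatChecked(BigDecimal v) {
        float fv = v.floatValue();
        if (fv == 0.0f && v.signum() != 0) {
            throw new ArithmeticException("value outside range for REAL");
        }
        return fv;
    }

    public static void main(String[] args) {
        System.out.println(toFloatChecked(new BigDecimal("1.5"))); // 1.5
        try {
            toFloatChecked(new BigDecimal("1e-60")); // underflows to 0.0f
        } catch (ArithmeticException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```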

          Dag H. Wanvik added a comment -

          Regressions passed.

          Knut Anders Hatlen added a comment -

          I guess we could see what happens when updateDouble() is used on an INTEGER column and use that for guidance. For example, what happens when trying to update to:

          • 1.1 (does it fail, or does it set the value to 1?)
          • 0.1 (analogous to the underflow case)

          And also, what happens with updateLong(1, Long.MAX_VALUE) on a REAL column? (I think it loses precision, but does it fail?)
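The updateLong case can be previewed in plain Java, independent of Derby: narrowing a long to float silently collapses nearby values, so precision is lost without any error from the language itself.

```java
public class LongToFloat {
    public static void main(String[] args) {
        // float has a 24-bit significand; near 2^63 its spacing is 2^39,
        // so distinct longs collapse to the same float with no error
        float a = Long.MAX_VALUE;
        float b = Long.MAX_VALUE - 1000L;
        System.out.println(a == b); // true: precision silently lost
    }
}
```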

          Dag H. Wanvik added a comment -

          Embedded uses Math.floor on the double/float value supplied to updateDouble; no error is given in either the 1.1 or the 0.1 case. In the case of updateLong(1, Long.MAX_VALUE) on a REAL column, Derby also loses precision with no error: embedded just assigns the long directly to a float value. So it seems we mostly ignore precision loss.

          On the other hand, for SQL REAL/DOUBLE "*" and "/", we do give an error on underflow to zero, e.g. in SQLDouble#times, so the picture is mixed.

          Knut Anders Hatlen added a comment -

          Thanks for running these experiments, Dag. It's an interesting find that embedded uses Math.floor() for the conversion in updateDouble(). That means -0.1 gets converted to -1 on embedded, whereas it's converted to 0 on the client. The conversion on the client sounds more intuitive to me in this case.

          When using PreparedStatement.setDouble() on an INT column, both drivers agree that -0.1 should be converted to -1 (probably because the client sends it as a floating point value across the wire and lets the server do the conversion).

          However, when invoking ResultSet.getInt() on a REAL column, both drivers convert -0.1 to 0. And CAST(-0.1 AS INT) also evaluates to 0.
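The embedded/client divergence described above comes down to Math.floor versus a plain narrowing cast, which differ exactly on negative fractions:

```java
public class FloorVsCast {
    public static void main(String[] args) {
        // Math.floor rounds toward negative infinity (embedded behavior);
        // a narrowing cast truncates toward zero (client behavior)
        System.out.println((int) Math.floor(-0.1)); // -1
        System.out.println((int) -0.1);             // 0
        // both agree on positive fractions
        System.out.println((int) Math.floor(0.1));  // 0
    }
}
```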

          Dag H. Wanvik added a comment -

          Yes, I would agree that converting -0.1 to -1 looks weird.

          Dag H. Wanvik added a comment -

          I have asked the JDBC expert group for advice on this one. I'll put this issue on hold until I hear back, unmarking "patch available".

          Dag H. Wanvik added a comment -

          Talked to Lance. We should follow the advice in the JDBC specification on this [1]: when writing to a data source, throw DataTruncation if information is lost/truncated; on reading from a data source, report (not throw) DataTruncation (a SQLWarning subclass). In this case, since we are updating a column, we should throw.

          [1] JDBC 4.1 specification section 8.3 DataTruncation http://download.oracle.com/otndocs/jcp/jdbc-4_1-mrel-spec/index.html
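For reference, java.sql.DataTruncation encodes the read/write distinction directly: since Java 6, its SQLState is initialized to "22001" for a write and "01004" for a read. A minimal sketch (the size arguments are made-up example values):

```java
import java.sql.DataTruncation;

public class TruncationStates {
    public static void main(String[] args) {
        // args: column index, parameter?, read?, dataSize, transferSize
        DataTruncation onWrite = new DataTruncation(1, false, false, 10, 4);
        DataTruncation onRead  = new DataTruncation(1, false, true, 10, 4);
        System.out.println(onWrite.getSQLState()); // 22001
        System.out.println(onRead.getSQLState());  // 01004
    }
}
```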

          Dag H. Wanvik added a comment -

          Hmm, no, I am no longer so sure that DataTruncation is applicable here; cf. the analysis below.

          SQL 2011 4.4.2 Characteristics of numbers

          "A number is assignable only to sites of numeric type. If an assignment of some number would result in a loss of its most significant digit, an exception condition is raised. If least significant digits are lost, implementation defined rounding or truncating occurs, with no exception condition being raised."

          The SQL state DATA_EXCEPTION_STRING_DATA_RIGHT_TRUNCATION (22001) would seem not to fit.
          The SQL state WARNING_STRING_DATA_RIGHT_TRUNCATION_WARNING (01004) would seem not to fit.

          The SQL state DATA_EXCEPTION_ERROR_IN_ASSIGNMENT (22005) and others
          e.g. DATA_EXCEPTION_NUMERIC_VALUE_OUT_OF_RANGE (22003) would seem to
          fit.

          JDBC specification 4.1 (quote)

          "The DataTruncation class,
          ------------------------
          a sub-class of SQLWarning, provides information when data is truncated. When data truncation occurs on a write to the data source, a DataTruncation object is thrown. The data value that has been truncated may have been written to the data source even if a warning has been generated. When data truncation occurs on a read from the data source, a SQLWarning is reported. A DataTruncation object contains the following information:

          :
          ■ the SQLState "01004" when data truncation occurs on a read from the data source
          ■ the SQLState "22001" when data truncation occurs on a write to the data source
          :

          The SQLDataException class
          --------------------------
          Table 8-1 specifies which NonTransientSQLException subclass must be thrown for a given SQLState class value:

          --------------------------------------

          22___ SQLDataException !
          --------------------------------------
          "

          [Aside: this would seem to conflict with the instructions earlier under DataTruncation to throw that class for 22001, but let's assume the instruction to use DataTruncation overrides this one... ]

          Also: SQLDataException can be thrown for vendor specific reasons.

          Note that both DataTruncation and SQLDataException are tied to SQL state 22___. The former is a SQLWarning, the latter is a full SQLException.

          So, again, for which subcodes of 22___ should one use DataTruncation or SQLDataException? The SQL standard itself specifies many 22___ states, all of which have the name prefix "DATA_EXCEPTION".

          For 22001, which JDBC ties to DataTruncation, the SQL name is
          DATA_EXCEPTION_STRING_DATA_RIGHT_TRUNCATION.
          For 01004 which JDBC ties to DataTruncation, the SQL name is
          WARNING_STRING_DATA_RIGHT_TRUNCATION_WARNING.

          So this all leads me to wonder whether DataTruncation should only be used for string types (thrown or reported as the case may be), and numeric types should throw SQLDataException (if most significant digits can't be preserved), or be silent if rounding/truncation of least significant digits occur.
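Under that reading, a numeric out-of-range on assignment would surface as a SQLDataException carrying a 22xxx state; a minimal sketch (the message text is hypothetical):

```java
import java.sql.SQLDataException;

public class NumericRangeError {
    public static void main(String[] args) {
        // 22003 = data exception: numeric value out of range
        SQLDataException e = new SQLDataException(
                "The resulting value is outside the range for the data type REAL.",
                "22003");
        System.out.println(e.getSQLState()); // 22003
    }
}
```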

          Dag H. Wanvik added a comment -

          Refreshed patch uploaded (#2). No significant changes, just works with latest trunk.

          Dag H. Wanvik added a comment - edited

          I'm satisfied that using SQLDataException is the right thing to do here. Unless I hear dissenting opinion, I will commit this fix shortly.

          Knut Anders Hatlen added a comment -

          I'm fine with the proposed changes. Maybe we should add a release note to give the users a heads up about this change?

          Dag H. Wanvik added a comment -

          Yes, I think we should. Although it's a corner case, it could trip up users who see the old behavior as a feature, not a bug, if they are even aware of it.

          Dag H. Wanvik added a comment -

          Added a release note to this issue.

          Knut Anders Hatlen added a comment -

          Patch and release note look good to me. +1

          Dag H. Wanvik added a comment -

          Committed patch as svn 1447996, resolving.

          Dag H. Wanvik added a comment -

          This patch could be backported if desired, but closing now. Please reopen to backport.

          Mamta A. Satoor added a comment -

          I will work on backporting this issue.

          ASF subversion and git services added a comment -

          Commit 1511185 from Mamta A. Satoor in branch 'code/branches/10.9'
          [ https://svn.apache.org/r1511185 ]

          DERBY-5546(ResultSet#updateBigDecimal on a REAL column does not do underflow checking)

          Backporting to 10.9

          Fix provided by Dag Wanvik

          Mamta A. Satoor added a comment -

          The fix for this Jira uses BigDecimal.ZERO, which was introduced in JDK 1.5, and hence it doesn't compile on 10.8, which uses JDK 1.4. I will mark the Jira as a 10.8 backport reject.


            People

            • Assignee:
              Dag H. Wanvik
              Reporter:
              Dag H. Wanvik
            • Votes:
              0
              Watchers:
              4
