DERBY-3398

Support min/max values for Java types float/double in real/double columns

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 10.4.1.3
    • Fix Version/s: 10.10.1.1
    • Component/s: SQL
    • Urgency:
      Normal
    • Issue & fix info:
      Release Note Needed

      Description

      Derby doesn't support the entire range of valid (finite) values of the Java primitive types double/float in columns with SQL type DOUBLE or REAL. This appears to be a limitation that was introduced for compatibility with DB2. There have been some requests on derby-user that we lift this restriction.

      The restriction is enforced by the methods normalizeREAL() and normalizeDOUBLE() in org.apache.derby.iapi.types.NumberDataType.
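The shape of that restriction can be sketched in plain Java (an illustrative model only; the DB2_* constant names and values below are approximations of the limits discussed in this issue, not Derby's actual NumberDataType source):

```java
// Illustrative sketch of the kind of range check normalizeREAL() performs
// under the DB2-compatible limits. Constant values are approximations.
public class Db2RealLimits {
    // Hypothetical stand-ins for Derby's DB2_* constants.
    static final float DB2_LARGEST_REAL = +3.402E+38f;
    static final float DB2_SMALLEST_REAL = -3.402E+38f;
    static final float DB2_SMALLEST_POSITIVE_REAL = +1.175E-37f;
    static final float DB2_LARGEST_NEGATIVE_REAL = -1.175E-37f;

    static float normalizeREAL(float v) {
        if (Float.isNaN(v)
                || v < DB2_SMALLEST_REAL || v > DB2_LARGEST_REAL
                || (v > 0 && v < DB2_SMALLEST_POSITIVE_REAL)
                || (v < 0 && v > DB2_LARGEST_NEGATIVE_REAL)) {
            throw new IllegalArgumentException(
                    "22003: out of range for REAL");
        }
        return v == 0.0f ? +0.0f : v; // normalize -0.0 to +0.0
    }

    public static void main(String[] args) {
        System.out.println(normalizeREAL(1.0f)); // accepted
        try {
            // Float.MIN_NORMAL (1.17549435E-38) is a valid Java float,
            // but falls below the DB2-style smallest positive REAL.
            normalizeREAL(Float.MIN_NORMAL);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Note how valid finite Java floats in the subnormal range are rejected; that is exactly the limitation this issue removes.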

      1. derby-3398.diff
        13 kB
        Dag H. Wanvik
      2. derby-3398.status
        0.7 kB
        Dag H. Wanvik
      3. derby-3398-2.diff
        37 kB
        Dag H. Wanvik
      4. derby-3398-3.diff
        42 kB
        Dag H. Wanvik
      5. derby-3398-3.stat
        0.9 kB
        Dag H. Wanvik
      6. derby-3398-4.diff
        67 kB
        Dag H. Wanvik
      7. derby-3398-5.diff
        47 kB
        Dag H. Wanvik
      8. derby-3398-5.stat
        0.9 kB
        Dag H. Wanvik
      9. derby-3398-7.diff
        415 kB
        Dag H. Wanvik
      10. derby-3398-7.stat
        4 kB
        Dag H. Wanvik
      11. derby-3398-8.diff
        415 kB
        Dag H. Wanvik
      12. releaseNote.html
        5 kB
        Rick Hillegas
      13. releaseNote.html
        5 kB
        Dag H. Wanvik
      14. releaseNote.html
        5 kB
        Dag H. Wanvik
      15. releaseNote.html
        5 kB
        Dag H. Wanvik

        Issue Links

          Activity

          Rick Hillegas added a comment -

          Corrected some markup problems with the release note which caused it to fail the checks applied by org.apache.derbyBuild.ReleaseNoteReader.

          Dag H. Wanvik added a comment -

          Thanks for fixing the j9 canons, Myrna!

          Dag H. Wanvik added a comment -

          Attached a new version of the release notes, which mentions the deferred checking in soft upgrade mode, as requested by Kathey.

          Myrna van Lunteren added a comment -

          I updated the j9 canon master/j9_foundation/iepnegativetests_ES.out with revision 1448865 (http://svn.apache.org/r1448865) to match the changes to the master of revision 1448002.

          Kathey Marsden added a comment -

          There was a discussion earlier in this issue regarding whether it is ok to defer limit checking in soft upgrade mode until execution instead of at the setting of the parameters. I think that is ok, but it should be mentioned in the release note.

          Dag H. Wanvik added a comment -

          Thanks for looking at this, Kim, Knut.
          The upgrade tests broke due to a missing comma introduced when I changed Float.MIN_NORMAL to a verbatim constant (famous last-minute fixes..); committed fix as svn r1448532.

          Knut Anders Hatlen added a comment -

          There's also a javadoc warning in the useDB2Limits() method:

          [javadoc] /code/derby/trunk/java/engine/org/apache/derby/iapi/types/NumberDataType.java:598: warning - @returns is an unknown tag.

          Kim Haase added a comment -

          Thanks for the info, Knut. I will file a doc issue and link it to this one.

          Knut Anders Hatlen added a comment -

          The upgrade test case for this issue seems to be failing in the continuous testing: http://download.java.net/javadesktop/derby/javadb-5572718-report/javadb-5572718-3596306-details.html

          junit.framework.AssertionFailedError
          at org.apache.derbyTesting.functionTests.tests.upgradeTests.Changes10_10.assertSetError(Changes10_10.java:403)
          at org.apache.derbyTesting.functionTests.tests.upgradeTests.Changes10_10.verifyDB2Behavior(Changes10_10.java:508)
          at org.apache.derbyTesting.functionTests.tests.upgradeTests.Changes10_10.testFloatLimits(Changes10_10.java:353)
          at org.apache.derbyTesting.junit.BaseTestCase.runBare(BaseTestCase.java:117)
          at org.apache.derbyTesting.junit.BaseJDBCTestCase.runBareOverridable(BaseJDBCTestCase.java:424)
          at org.apache.derbyTesting.junit.BaseJDBCTestCase.runBare(BaseJDBCTestCase.java:441)
          (...)

          Knut Anders Hatlen added a comment -

          Hi Kim,

          Yes, those limits need to be updated. New values are:

          Smallest DOUBLE: 4.9e-324 (aka Double.MIN_VALUE)
          Largest DOUBLE: 1.7976931348623157e+308 (aka Double.MAX_VALUE)
          Smallest positive DOUBLE: 2.2250738585072014E-308 (aka Double.MIN_NORMAL)
          Largest negative DOUBLE: -2.2250738585072014E-308
          Smallest REAL: 1.4e-45 (aka Float.MIN_VALUE)
          Largest REAL: 3.4028235e+38 (aka Float.MAX_VALUE)
          Smallest positive REAL: 1.17549435E-38 (aka Float.MIN_NORMAL)
          Largest negative REAL: -1.17549435E-38
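
The listed limits can be double-checked against the Java platform constants with a trivial standalone program (plain Java, independent of Derby; MIN_NORMAL requires Java 6 or later):

```java
// Print the Java floating-point limit constants that the new
// DOUBLE/REAL ranges are based on.
public class JavaFloatLimits {
    public static void main(String[] args) {
        System.out.println(Double.MIN_VALUE);  // 4.9E-324
        System.out.println(Double.MAX_VALUE);  // 1.7976931348623157E308
        System.out.println(Double.MIN_NORMAL); // 2.2250738585072014E-308
        System.out.println(Float.MIN_VALUE);   // 1.4E-45
        System.out.println(Float.MAX_VALUE);   // 3.4028235E38
        System.out.println(Float.MIN_NORMAL);  // 1.17549435E-38
    }
}
```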

          Kim Haase added a comment -

          Just checking – does this fix have any documentation impact, or should we leave the described limits as they are in http://db.apache.org/derby/docs/10.9/ref/rrefnumericlimits.html , http://db.apache.org/derby/docs/10.9/ref/rrefsqljdoubleprecision.html , and http://db.apache.org/derby/docs/10.9/ref/rrefsqlj14122.html?

          Dag H. Wanvik added a comment -

          Uploading #8 with the fix for MIN_NORMAL, committed patch as svn 1448002.

          Commit log:

          Removes the legacy DB limits on floating point numbers.

          This lets the application use Derby for all Java float and double
          values with the exception of NaN (not a number), -0.0 (normalized to
          +0.0) and +/- infinity as defined in the IEEE-754 1985 floating-point
          standard.

          After this change, on soft upgrade, instead of throwing on the DB2
          limits when calling ResultSet#updateXXX or PreparedStatement#setXXX,
          the check throws on ResultSet#updateRow, or #insertRow, and similarly
          on PreparedStatement#execute.

          Upgrade tests test this behavior. But don't run floating
          ResultSet#updateXXX on old releases prior to 10.2 (they don't support
          forward updatable result sets).

          The patch also fixes the max display width for REAL and DOUBLE numbers,
          since some values' printed representations were truncated by ij (even
          before the DB2 limits change). This accounts for all the canon
          changes.
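
The validity rule described in the commit log can be sketched as follows (a plain-Java model of the described behavior, not Derby's actual implementation):

```java
// Sketch of the post-change rule: all finite doubles are accepted,
// NaN and +/- infinity are rejected, and -0.0 is normalized to +0.0.
public class PostChangeCheck {
    static double normalizeDOUBLE(double v) {
        if (Double.isNaN(v) || Double.isInfinite(v)) {
            throw new IllegalArgumentException(
                    "22003: out of range for DOUBLE");
        }
        return v == 0.0 ? +0.0 : v; // -0.0 becomes +0.0
    }

    public static void main(String[] args) {
        // Values formerly outside the DB2 limits are now accepted.
        System.out.println(normalizeDOUBLE(Double.MAX_VALUE));
        System.out.println(normalizeDOUBLE(Double.MIN_VALUE));
    }
}
```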

          Dag H. Wanvik added a comment -

          Uploading new rev of release notes.

          Dag H. Wanvik added a comment -

          Thanks, guys, will make another rev of the release notes with those corrections. Knut, you were right about MIN_NORMAL; fixed that, posting a final patch when I have run my regressions again. Setting up my ant.properties so I'll catch that next time around (used to have those settings..)

          Knut Anders Hatlen added a comment -

          Thanks, Dag.

          The patch looks fine to me. You may want to double-check if it builds if you have the j15lib property defined. I suspect it may fail because the tests directly reference the MIN_NORMAL constants, which are new in Java 6.

          In the release note, the summary is supposed to be a single sentence that fits on a single line, so that the release note generator can put it into a bullet list. Maybe just keep the first sentence and move the extra details to the symptoms section?

          Typo in incompatibilities section: "can no" -> "cannot" or, perhaps, "can no longer"?

          In Application Changes Required section: "to detect when use of numbers outside the DB2 limits were used". Drop "use of"?

          Kristian Waagan added a comment -

          Thanks for uploading a release note, Dag.

          A few comments:
          o from the template: "This is a one line, one sentence summary of the issue." So I guess the summary is too long and verbose.
          o "To allow applications to use Derby to store a wider range of the Java floating point number types." -> "To allow applications to store a wider range of [the] Java floating point numbers in Derby"?
          (simplification and matter of preference, feel free to ignore )
          o "The application can no rely on": "no" -> "not" ?
          o clean up / simplify "to detect when use of numbers outside the DB2 limits were used,"?

          Dag H. Wanvik added a comment -

          Uploading release notes for this issue.

          Dag H. Wanvik added a comment (edited) -

          I should add, the patch also addresses Knut's comments.

          Dag H. Wanvik added a comment (edited) -

          Uploading patch #7 after rerunning regressions and finding issues.

          It changes the metadata maximum display size for REAL and DOUBLE to 15 and 24, from 13 and 22 respectively, since some values were truncated by ij.

          The patch adds many canon changes, and also disables the ResultSet#updateXXX part of the Changes10_10 test for pre-10.2 releases since they do not support forward updatable result sets.

          The regressions ran ok with these changes modulo an error in derbynet/DerbyNetAutoStart.java, but I see that on trunk as well.

          Dag H. Wanvik added a comment (edited) -

          Thanks, Knut. As for SQLDataException, now that I have run this once and am satisfied the right exception subclass is used, I guess we can move it back to SQLException; we have other tests that check the subclasses (TestJDBC40EException).

          I used some other constants to test that we accept smaller values than DB2_SMALLEST_POSITIVE_REAL & friends. For example in table beyondDB2Real I used the constant "+1.174E-37f", smaller than DB2_SMALLEST_POSITIVE_REAL (+1.175E-37f), but we can add Float.MIN_NORMAL (1.17549435E-38) too, sure.

          I wasn't aware of the class loader issues you mention, I'll fix that.

          Knut Anders Hatlen added a comment -

          Looks fine to me. It won't be fully consistent with the client driver until DERBY-5534 is fixed, but now at least it should be possible to fix DERBY-5534 without adding knowledge about the DB2 limits to the client driver.

          Some small comments:

          • The changes to DataDictionaryImpl look like leftovers from an earlier patch.
          • Changes10_10 imports sun.print.PSStreamPrintService. It shouldn't.
          • Changes10_10 uses SQLDataException. Will that work on older platforms (before JDBC 4.0)?
          • It might be worth adding test cases for +/- Float.MIN_NORMAL and +/- Double.MIN_NORMAL. Cannot use the constants directly, as they were added in Java 6, but the values could be tested. Those would test that we're no longer limited by DB2_SMALLEST_POSITIVE_REAL, DB2_LARGEST_NEGATIVE_REAL, DB2_SMALLEST_POSITIVE_DOUBLE and DB2_LARGEST_NEGATIVE_DOUBLE.
          • Nit: The two normalizeREAL() methods use different casing of "invalid". I prefer the lowercase variant.
          • Typo: Comment in NumberDataType.useDB2Limits() says "do no" instead of "do not".
          • Changes10_10: The ResultSets produced in the first for loop aren't closed.
          • Changes10_10: There are three PreparedStatements stored in fields. This technique has been a frequent source of memory leaks in the tests, so it is best avoided if possible. In this case, I think it will prevent the class loaders for the old engines to be garbage collected after the test has completed. Setting the fields to null in tearDown() would help, but making them local variables is probably less error-prone.
          Dag H. Wanvik added a comment -

          Uploading patch #5, which implements the deferred check on DB2 limits in soft upgrade mode, when no lcc is present, cf. discussion above.

          The upgrade tests now verify this distinction, cf. the "defer" parameter to Changes10_10#verifyDB2Behavior.

          The patch also adds upgrade tests for the ResultSet#updateXXX case.

          Rerunning regressions.

          Dag H. Wanvik added a comment -

          Thanks, guys. What you suggest could be a way forward here. The only downside I see is that applications in soft upgrade mode relying on the DB2 limits checks would see deferred checking on embedded in the lcc-less scenarios, whereas before soft upgrade they got an immediate check.
          I think we could accept that small behavior change and handle it with a release note. Does anybody think this is (too) risky?

          Knut Anders Hatlen added a comment -

          One could also imagine the following behaviour, consistent across the drivers:

          • immediate checking of +/- Infinity and NaN in both client and embedded
          • deferred checking of DB2 limits in both client and embedded

          Since DB2 limits will no longer be checked in new databases, that would mean all range checks happen immediately in upgraded/fresh databases.

          Deferred checks will only happen in soft upgraded databases. Applications that cannot live with the deferred checks could upgrade the database dictionary, and the deferred checks will go away.

          Kristian Waagan added a comment -

          I talked briefly to Dag on this topic, and suggested that a deferred check would be somewhat less user-friendly than an immediate check.

          However, the drawbacks of deferred checking in the embedded driver may be acceptable if
          o it makes the drivers behave consistently
          o it simplifies the relevant version check logic in the client driver (i.e. the checking is only performed on the server side)

          Knut Anders Hatlen added a comment -

          In the cases where we don't have a language connection context (updateXXX() and setXXX()) the values will be checked in a later step (updateRow() and execute()), so we still won't be able to insert illegal values if we skip the range check in soft upgrade mode when the language connection context is not available. That wouldn't be too bad, I think. In fact, that's consistent with how the range checking is done in the client driver. It too defers range checking till updateRow() or execute().
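
The immediate-versus-deferred distinction can be modeled in a few lines of plain Java (a conceptual toy only; the class and method names are hypothetical and this is not Derby or JDBC code):

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual model of deferred range checking: setDouble() merely
// records the value, and the check runs at execute() time, mirroring
// how the client driver defers its range checks.
public class DeferredCheckModel {
    private final List<Double> pending = new ArrayList<>();

    void setDouble(double v) {
        pending.add(v); // deferred: no range check here
    }

    int execute() {
        for (double v : pending) {
            if (Double.isNaN(v) || Double.isInfinite(v)) {
                throw new IllegalStateException("22003: out of range");
            }
        }
        int n = pending.size();
        pending.clear();
        return n; // "rows affected" in this toy model
    }

    public static void main(String[] args) {
        DeferredCheckModel ps = new DeferredCheckModel();
        ps.setDouble(Double.POSITIVE_INFINITY); // accepted for now
        try {
            ps.execute(); // fails here, not at setDouble()
        } catch (IllegalStateException e) {
            System.out.println("deferred failure: " + e.getMessage());
        }
    }
}
```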

          Dag H. Wanvik added a comment (edited) -

          Uploading a patch (#4) which changes the way we check dictionary version in NumberDataType:

          On seeing values outside the old DB2 ranges that are otherwise legal (i.e. not infinity or NaN), we check if the current dictionary level is high enough.
          If we have an lcc (language connection context), we use that to find the dictionary version. If not, we rely on a new short field in NumberDataType poked into the DVD at the time the value is set (ResultSet#updateXXX, PreparedStatement#setXXX). This is a similar approach to that used by Kristian when handling the CLOB header versioning. It seems to work, but I am running full regressions now, and I'll add more tests later if we decide to use this approach.

          There are two downsides, one minor and one slightly less so:

          • if the value is outside the old limits (and only then), we do some extra (costly?) processing to check the dictionary version.
          • we have added a short member to all NumberDataType classes (increases heap usage by 4 or 8 bytes per value object depending on the VM's allocation strategy.)

          As before, the patch relies on the patch for DERBY-5546. Another quirk shows up, though; for some reason, when I compile the patched source I get an error: it seems the compilation of the iapi.types classes triggers a compile of EmbedResultSet, which fails, presumably because the iapi classes are compiled with too low a source level: javac complains about not finding enough implementations in ERS. An incremental compile of the patch works, though. I'll look into why this happens.

          Dag H. Wanvik added a comment -

          It's actually tricky to get the current dictionary version deep in the type machinery. In a result set we do not necessarily have an lcc context, either.
          Suggestions? (I am loath to add extra parameters to all the type methods.)

          Dag H. Wanvik added a comment -

          Thanks for detecting the boolean howler, Knut. As for accepting the values always, I am sceptical; cf. this example (your scenario post soft-upgrade insert):

          ij> select d from t10_9;
          D
          ----------------------
          1.7976931348623157E308

          ij> select d*1.0 from t10_9;
          1
          ----------------------
          ERROR 22003: The resulting value is outside the range for the data type DOUBLE.

          ij> update t2 set d=(select max(d) from t10_9);
          ERROR XJ001: Java exception: 'ASSERT FAILED error on clone, value = 1.7976931348623157E308 isnull = false: org.apache.derby.shared.common.sanity.AssertFailure'.
          ERROR 22003: The resulting value is outside the range for the data type DOUBLE.

          So it seems the normalization is only skipped in certain contexts...

          Knut Anders Hatlen added a comment -

          I tried this simple upgrade test with the #2 patch (that is, before the upgrade logic was added):

          1) Create a new database with 10.9.1.0 and create a table: create table t(d double);

          2) Attempt to insert a previously illegal value:

          ij> insert into t values 1.7976931348623157e+308;
          ERROR 22003: The resulting value is outside the range for the data type DOUBLE. (errorCode = 30000)

          3) Boot the database with the #2 patch and re-run the above insert statement (which now succeeds).

          4) Reboot the database with 10.9.1.0 and read the table:

          ij> select * from t;
          D
          ----------------------
          1.7976931348623157E308

          1 row selected

          So from this test case, it looks like the old versions are able to read values that are out of range from a table. So maybe it would be OK to accept the values unconditionally in the new versions?

          Knut Anders Hatlen added a comment -

          Will the static boolean flag be able to keep track of the correct behaviour for each database if there are multiple open databases in the same JVM?
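          A hypothetical sketch of the concern (all class and field names here are invented for the example): a static flag is JVM-wide, so whichever database boots last silently overrides the behaviour for every other open database.

```java
// Invented names, illustrating why per-database state cannot live in a static.
class TypeRangeFlag {
    static boolean relaxedDoubleRange; // one flag shared by the whole JVM
}

class FakeDatabase {
    final boolean hardUpgradedTo10_10;
    FakeDatabase(boolean upgraded) { hardUpgradedTo10_10 = upgraded; }
    void boot() { TypeRangeFlag.relaxedDoubleRange = hardUpgradedTo10_10; }
}

public class StaticFlagHazard {
    public static void main(String[] args) {
        new FakeDatabase(true).boot();   // hard-upgraded database
        new FakeDatabase(false).boot();  // legacy database boots second
        // The upgraded database now sees the legacy behaviour too:
        System.out.println(TypeRangeFlag.relaxedDoubleRange);
    }
}
```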

          Dag H. Wanvik added a comment - edited

          Uploading patch derby-3398-3, which replaces #1 and #2, adding upgrade logic and tests.
          The DB2 limits will only be removed on hard upgrade to >= 10.10.
          Rerunning regressions.

          Dag H. Wanvik added a comment -

          The changes broke some regression tests; uploading a new patch that fixes them. No new tests added yet. Rerunning regressions.

          Knut Anders Hatlen added a comment -

          Storing NaN and +/- Infinity is tracked by DERBY-3290.

          Dag H. Wanvik added a comment -

          Thanks, Knut. Yes, I'll surely be adding more tests. The upgrade question needs considering. Probably we should not remove the old limits if we are in soft upgrade - just to be safe. The limits are all in the engine, so client/server versions shouldn't be an issue. The issue would be even more pressing if we allowed NaN: we still refuse to store or deal with NaN and +/-Infinity. We could consider lifting those restrictions too, but that's for another issue, I think. I see that PostgreSQL allows them:

          http://www.postgresql.org/docs/8.3/static/datatype-numeric.html

          but it requires handling equality and sorting too. From the PostgreSQL docs:

          "Note: IEEE754 specifies that NaN should not compare equal to any other floating-point value (including NaN). In order to allow floating-point values to be sorted and used in tree-based indexes, PostgreSQL treats NaN values as equal, and greater than all non-NaN values."
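          As an observation about the Java API (not about Derby's code): java.lang.Double.compare already imposes exactly the total ordering PostgreSQL describes, treating NaN as equal to itself and greater than every other value, while == keeps the IEEE 754 semantics:

```java
public class NaNOrdering {
    public static void main(String[] args) {
        // IEEE 754 semantics: NaN compares unequal to everything, itself included
        System.out.println(Double.NaN == Double.NaN);                              // false
        // Total order used for sorting: NaN equals NaN and sorts above +Infinity
        System.out.println(Double.compare(Double.NaN, Double.NaN));                // 0
        System.out.println(Double.compare(Double.NaN, Double.POSITIVE_INFINITY));  // 1
    }
}
```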

          Knut Anders Hatlen added a comment -

          Thanks, Dag. I think this is a useful improvement.

          Some ideas for more tests:

          • Use the extreme values in some additional contexts. For example in arithmetic operations and as literals.
          • What happens if a previously disallowed value is stored in a soft-upgraded database and is then read after a downgrade?
          • The patch has test cases for MIN_VALUE and MAX_VALUE. Should probably also test for +/- MIN_NORMAL, which used to be out of range, but should be supported now.
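          For reference, the three extremes mentioned in the test ideas print as follows in Java (Double.MIN_NORMAL is the smallest positive normal double):

```java
public class ExtremeDoubles {
    public static void main(String[] args) {
        System.out.println(Double.MAX_VALUE);   // 1.7976931348623157E308
        System.out.println(Double.MIN_NORMAL);  // 2.2250738585072014E-308
        System.out.println(Double.MIN_VALUE);   // 4.9E-324 (smallest subnormal)
        // MIN_NORMAL lies below DB2's smallest positive DOUBLE (about 2.225E-307),
        // which is why it used to be out of range
        System.out.println(Double.MIN_NORMAL < 2.225E-307); // true
    }
}
```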
          Dag H. Wanvik added a comment - edited

          Uploading a patch which removes this DB2 restriction and adds tests. Running regressions.
          The patch builds on (requires) the patch for DERBY-5546.


            People

            • Assignee:
              Dag H. Wanvik
            • Reporter:
              Knut Anders Hatlen
            • Votes: 0
            • Watchers: 7
