DERBY-5488

Add remaining JDBC 4.1 bits which did not appear in the Java 7 javadoc.

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 10.9.1.0
    • Fix Version/s: 10.9.1.0
    • Component/s: JDBC, SQL
    • Labels:
      None
    • Urgency:
      Normal
    • Issue & fix info:
      Release Note Needed

      Description

      In addition to the JDBC 4.1 bits which were visible in the Java 7 javadoc, a couple other items appear in the JDBC 4.1 Maintenance Review spec. This spec has been published on the JCP website at http://download.oracle.com/otndocs/jcp/jdbc-4_1-mrel-eval-spec/index.html. I will attach a functional spec for the remaining bits.

      1. derby-5488-01-aa-objectMappingAndConversion.diff
        10 kB
        Rick Hillegas
      2. derby-5488-02-aa-fixBigInteger.diff
        8 kB
        Rick Hillegas
      3. derby-5488-03-ac-moveDecimalSetterGetterAndTest.diff
        11 kB
        Rick Hillegas
      4. derby-5488-04-aa-fixBigIntegerDecimal.diff
        6 kB
        Rick Hillegas
      5. derby-5488-05-ad-limitOffset.diff
        34 kB
        Rick Hillegas
      6. derby-5488-06-aa-limitOffsetTests.diff
        44 kB
        Rick Hillegas
      7. derby-5488-07-aa-booleanObjects.diff
        5 kB
        Rick Hillegas
      8. derby-5488-08-aa-extraLimitOffsetTest.diff
        2 kB
        Rick Hillegas
      9. derby-5488-09-aa-jdbcMinorVersion.diff
        2 kB
        Rick Hillegas
      10. derby-5488-10-aa-metadataTypo.diff
        3 kB
        Rick Hillegas
      11. derby-5488-10-ab-metadataTypo.diff
        6 kB
        Rick Hillegas
      12. derby-5488-10-ac-metadataTypo.diff
        9 kB
        Rick Hillegas
      13. derby-5488-11-aa-javadoc.diff
        7 kB
        Rick Hillegas
      14. fix-jdbc30-test.diff
        1 kB
        Knut Anders Hatlen
      15. JDBC_4.1_Supplement.html
        5 kB
        Rick Hillegas
      16. releaseNote.html
        3 kB
        Rick Hillegas
      17. z.java
        0.9 kB
        Rick Hillegas


          Activity

          Rick Hillegas added a comment -

           Attaching derby-5488-11-aa-javadoc.diff. This adjusts the top-level index.html to note that the JDBC 4.1 API should be consulted if you are running on Java 6 or higher. This also makes corresponding changes to the javadoc for the JDBC 4.1 DataSources.

          Committed at subversion revision 1345197. Ported to 10.9 branch at subversion revision 1345204.

          Touches the following files:

          M java/engine/org/apache/derby/jdbc/EmbeddedXADataSource40.java
          M java/engine/org/apache/derby/jdbc/EmbeddedDataSource40.java
          M java/engine/org/apache/derby/jdbc/EmbeddedConnectionPoolDataSource40.java
          M java/client/org/apache/derby/jdbc/ClientXADataSource40.java
          M java/client/org/apache/derby/jdbc/ClientConnectionPoolDataSource40.java
          M java/client/org/apache/derby/jdbc/ClientDataSource40.java
          M index.html

          Rick Hillegas added a comment -

           Re-opening this issue in order to attach a new patch.

          Rick Hillegas added a comment -

          Thanks for thinking about the JDBC level supported by 10.9.1.0. I will fix the wording of index.html in the trunk and the 10.9 branch. We'll pick up the better wording if we need to build a second release candidate.

          The JDBC version level when running on Java 6 has caused confusion before. A case could be made for either 4.0 or 4.1. We settled on 4.1. Our reasoning can be found in the comments above, logged on 2011-11-15.

          Thanks,
          -Rick

          fpientka added a comment -

           In index.html, add Java SE 7 so that the line reads:
           JDBC 4.0 Public API - Consult this javadoc if your application runs on Java SE 6 and Java SE 7.

          fpientka added a comment -

           The JDBC version (Java SE 6 - JDBC 4.1) shown in org.apache.derby.mbeans.JDBCMBean from derby.jar 10.9.1.0 - (1344872) under JRE 6 is wrong; it should be Java SE 6 - JDBC 4.0.
           But with JRE 7 it's OK (Java SE 7 - JDBC 4.1).

          Rick Hillegas added a comment -

          Thanks, Kristian. I believe the extra JDBC 4.1 bits have been added. I am resolving this issue.

          Kristian Waagan added a comment -

          To me it seems the newest patch has been committed, so I cleared the patch available flag.
          What's the status on this issue now?

          Rick Hillegas added a comment -

          Committed derby-5488-10-ac-metadataTypo.diff to trunk at subversion revision 1204684.

          Rick Hillegas added a comment -

          Attaching derby-5488-10-ac-metadataTypo.diff. The previous version of the patch raised 2 errors in ViewTest, caused by the fact that DatabaseMetaData.getColumns() now returns a ResultSet with 25 columns rather than 24 columns. This new version of the patch adjusts ViewTest to expect 25 columns.

          Touches the following additional file:

          M java/testing/org/apache/derbyTesting/functionTests/tests/lang/ViewsTest.java

          Rick Hillegas added a comment -

          Attaching a first rev of a release note to describe backward incompatibilities introduced by renaming a metadata column.

          Rick Hillegas added a comment -

          Attaching derby-5488-10-ab-metadataTypo.diff. This version of the patch adds a redundant SCOPE_CATLOG column to the end of the ResultSet returned by DatabaseMetaData.getColumns() as Lance and Knut suggested. This bit of defensive logic should reduce the risk of backward incompatibility problems. I am running regression tests now.

          Touches the same files as the previous version of this patch.

          Rick Hillegas added a comment -

          Thanks for reviewing the issues raised by derby-5488-10-aa-metadataTypo.diff, Knut. In private email, JDBC spec lead Lance Andersen also raised the possibility of letting the DatabaseMetaData.getColumns() ResultSet recognize SCOPE_CATLOG as a column name as well as SCOPE_CATALOG. As you note, vendors are allowed to add extra columns to these ResultSets.

          I agree that the extra SCOPE_CATLOG column would be a useful piece of defensive logic. I will add it in a revised version of this patch. Note that the position of the extra SCOPE_CATLOG column will change if a future rev of the JDBC spec adds more columns to the end of this ResultSet. That will affect users who do not pay attention to the warning that vendor-specific columns should only be referenced by name, not by position. If needed, we can address that issue in a future release note accompanying our implementation of those spec changes.

          Thanks,
          -Rick

          Rick Hillegas added a comment -

          Thanks for fixing that typo, Knut.

          Knut Anders Hatlen added a comment -

          The approach in derby-5488-10-aa-metadataTypo.diff looks reasonable to me. It's the simplest solution, and it should be easy enough to add a release note to explain the impact on existing applications.

          If we were to code it more defensively, I think I'd prefer to rename column 19 to SCOPE_CATALOG and add a 25th column called SCOPE_CATLOG. That way, the applications that call rs.getString(19), as well as those that call rs.getString("SCOPE_CATLOG"), will continue to work, regardless of server/client versions. And so will applications that use the correctly spelled rs.getString("SCOPE_CATALOG"), provided that the server version is at least 10.9.

          The spec allows us to add columns this way, see this sentence in the javadoc for java.sql.DatabaseMetaData: "Additional columns beyond the columns defined to be returned by the ResultSet object for a given method can be defined by the JDBC driver vendor and must be accessed by their column label."

          Knut Anders Hatlen added a comment -

          derby-5488-09-aa-jdbcMinorVersion.diff had a typo that made DatabaseMetaDataTest fail on Java 5 (http://dbtg.foundry.sun.com/derby/test/Daily/jvm1.5/testing/Limited/testSummary-1204020.html). The attached patch (fix-jdbc30-test.diff) corrects the typo and makes DatabaseMetaDataTest pass with Java 5, 6 and 7 in my environment.

          Committed revision 1204432.

          Rick Hillegas added a comment -

          Committed derby-5488-09-aa-jdbcMinorVersion.diff to trunk at subversion revision 1203754. This changes the JDBC level to 4.1 when running on Java 6 or 7.

          Rick Hillegas added a comment -

          Marking the "Release note needed" flag because of the backward incompatibility introduced by changing SCOPE_CATLOG to SCOPE_CATALOG.

          Rick Hillegas added a comment -

           Attaching derby-5488-10-aa-metadataTypo.diff. This is a simple candidate patch to change SCOPE_CATLOG to SCOPE_CATALOG. Regression tests pass cleanly on this patch.

          Before discussing this patch and alternatives we might consider, I want to summarize my understanding of this problem:

          A) The JDBC expert group regards this as fixing a typo in the javadoc. I believe that some other databases recognized the typo for what it was and always named the column SCOPE_CATALOG. Derby, however, hewed closely to the published javadoc and called the column SCOPE_CATLOG.

          B) For those other databases, there is no functional change. A documentation typo has simply been corrected. For Derby, however, the change creates a backward incompatibility.

          C) Derby must break one of its important constraints. There is no way that we can conform to the corrected JDBC javadoc and avoid a backward incompatibility.

          D) I think that the backward incompatibility is quite minor, nevertheless. The column in question carries no meaning for Derby. The column only has meaning for databases which implement both catalogs and reference types. For Derby, the column always contains a null. I doubt that (m)any Derby users inspect this column at all, let alone by name.

          Here are the user-visible effects of some possible solutions:

          1) Based on engine version - The column is called SCOPE_CATALOG if DatabaseMetaData.getDatabaseMajorVersion() and DatabaseMetaData.getDatabaseMinorVersion() report that the engine is at Derby 10.9 or higher. Otherwise, the column is called SCOPE_CATLOG. This is the approach taken by this patch.

          2) Based on client version - The column is called SCOPE_CATALOG if DatabaseMetaData.getDriverMajorVersion() and DatabaseMetaData.getDriverMinorVersion() report that the client is at Derby 10.9 or higher. Otherwise, the column is called SCOPE_CATLOG.

          3) Based on JDBC driver version - The column is called SCOPE_CATALOG if DatabaseMetaData.getJDBCMajorVersion() and DatabaseMetaData.getJDBCMinorVersion() report that the driver is at JDBC 4.1 or higher. Otherwise, the column is called SCOPE_CATLOG.

          Even fancier solutions are possible. They involve combinations of the JDBC and driver versions at the client and engine. I believe that the solutions listed above give rise to straightforward workarounds for applications affected by this change. They are easy to explain. The fancier solutions push more complexity into the application and/or involve backporting tricky code into older Derby branches.
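
           For example, under option (1) an application could choose the column label from the engine version reported by DatabaseMetaData. This is a hypothetical sketch (the helper name is mine, not Derby's); the version check itself is a pure function:

```java
public class ScopeColumnCompat {
    /**
     * Chooses the scope-catalog column label based on the Derby engine
     * version, per option (1): 10.9 and later use the corrected spelling.
     */
    public static String scopeCatalogLabel(int engineMajor, int engineMinor) {
        boolean atLeast109 = engineMajor > 10
                || (engineMajor == 10 && engineMinor >= 9);
        return atLeast109 ? "SCOPE_CATALOG" : "SCOPE_CATLOG";
    }
}
```

           An application would feed it the results of DatabaseMetaData.getDatabaseMajorVersion() and getDatabaseMinorVersion() before calling rs.getString(label).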

          Of the straightforward solutions, I opted for (1) because it was the easiest to implement. A casual look at options (2) and (3) suggests that they involve adding some potentially tricky code to our JDBC drivers. I did not think that this problem warranted the additional complexity.

          But those are my opinions. I am open to arguments that we should solve this problem a different way.

          Thanks in advance for your feedback.

          Touches the following files:

          ------------

          M java/engine/org/apache/derby/impl/jdbc/metadata.properties
          M java/engine/org/apache/derby/impl/jdbc/EmbedDatabaseMetaData.java

          Actual change to the JDBC metadata.

          ------------

          M java/testing/org/apache/derbyTesting/functionTests/tests/jdbcapi/DatabaseMetaDataTest.java

          Corresponding change to the regression test for this metadata.

          Rick Hillegas added a comment -

          Here's another small piece of work. JDBC 4.1 fixes the mis-spelled name of one of the columns in the ResultSet returned by DatabaseMetaData.getColumns(). The column used to be called SCOPE_CATLOG and is now called SCOPE_CATALOG. See DERBY-1279 and DERBY-137.

          Rick Hillegas added a comment -

           Thanks for that quick response, Knut. I think a similar problem can arise in earlier versions of Derby: for instance, an application compiled on Java 6 to run at JVM level 1.5 might try to call a JDBC 4.0 method which we implemented in our JDBC 3.0 drivers. If the application had been compiled against the JDBC 3.0 libraries (as we do in our Derby builds), the error would have been caught at compile time, not run time. A solution in both cases is to use reflection to access the methods which don't appear in the older JVM. I tend to think this is an error on the part of the programmer, not Derby: the programmer is trying to do something that can't work, and they have tricked the compiler into not helping them detect this early on.

          However, we are certainly in the confusing space which Bryan is talking about.
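
           The reflection workaround mentioned above amounts to probing for the newer method before calling it. A minimal sketch (the class name is hypothetical) that checks at run time whether the JDBC 4.1 typed getObject is available:

```java
import java.sql.ResultSet;

public class Jdbc41Probe {
    /**
     * Returns true when the runtime's java.sql.ResultSet declares the
     * JDBC 4.1 method getObject(int, Class), i.e. on Java 7 or later.
     */
    public static boolean hasTypedGetObject() {
        try {
            ResultSet.class.getMethod("getObject", int.class, Class.class);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }
}
```

           When the probe returns false, the application sticks to the pre-4.1 getters (rs.getInt, rs.getObject(int)); when it returns true, it can invoke the new method reflectively without ever triggering the NoSuchMethodError shown in Knut's example on older JVMs.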

          Knut Anders Hatlen added a comment -

          > Can anyone think of a way that a program would fail because of this proposed Derby behavior?

          Perhaps a little far-fetched, but this small program works on Java 6 if getJDBCMinorVersion() returns 0 and fails if it returns 1:

           import java.sql.*;

           public class Test {
               public static void main(String[] args) throws SQLException {
                   Connection c = DriverManager.getConnection("jdbc:derby:memory:db;create=true");

                   DatabaseMetaData dmd = c.getMetaData();
                   int major = dmd.getJDBCMajorVersion();
                   int minor = dmd.getJDBCMinorVersion();

                   boolean isAtLeastJDBC41 = (major == 4 && minor >= 1) || major > 4;

                   Statement s = c.createStatement();
                   ResultSet rs = s.executeQuery("values 1234");
                   while (rs.next()) {
                       if (isAtLeastJDBC41) {
                           Integer i = rs.getObject(1, Integer.class);
                           System.out.println("I:" + i);
                       } else {
                           int i = rs.getInt(1);
                           System.out.println("i:" + i);
                       }
                   }
               }
           }

          You'll need to compile it with the Java 7 compiler and specify -source 1.6 and -target 1.6 to make it run on Java 6.

          With minor version = 0 on Java 6:

          $ java Test
          i:1234

          With minor version = 1 on Java 6:

          $ java Test
          Exception in thread "main" java.lang.NoSuchMethodError: java.sql.ResultSet.getObject(ILjava/lang/Class;)Ljava/lang/Object;
          at Test.main(Test.java:16)

          Rick Hillegas added a comment -

          Thanks for that additional analysis, Bryan. I will hold off committing this patch until Friday. Maybe other opinions will surface.

          I agree that the situation for the application programmer is confusing. Can anyone think of a way that a program would fail because of this proposed Derby behavior?

          Thanks,
          -Rick

          Rick Hillegas added a comment -

          Thanks to Bryan and Knut for helping me sort out what JDBC level our Java 6 drivers should report.

          Except for the very highest JDBC level we support, all of our JDBC driver implementations contain methods which were introduced by higher levels of the spec. So for instance...

          1) Our JSR 169 implementation contains lots of methods which were introduced in JDBC 2.0 and 3.0.

          2) Our JDBC 3.0 implementation contains some methods which were introduced by JDBC 4.0.

          Nevertheless, those implementations don't claim to fully implement the higher JDBC rev levels from which they borrow methods.

          JDBC 4.1 is an interesting special case. All other JDBC levels introduced data types which did not appear in their predecessors. For this reason...

          1') Our JSR 169 implementation doesn't contain an implementation of the java.sql.ParameterMetaData type which was introduced in JDBC 3.0. A JDBC implementation which runs on small devices cannot provide an implementation of ParameterMetaData and so can not claim to implement JDBC 3.0.

          2') Our JDBC 3.0 implementation doesn't contain methods which mention java.sql.SQLXML, a type which was introduced by JDBC 4.0. A JDBC implementation which runs on Java 5 cannot contain methods which mention SQLXML and so can not claim to implement JDBC 4.0.

          JDBC 4.1 is the first rev of JDBC which does not mention any types which were not available to its predecessor. It is therefore the first rev of JDBC which could be implemented to run on a lower rev level of the JVM.

          So the short answer to Bryan's question about precedents is: No, there is no precedent. The slightly longer answer to Bryan's question is: ...perhaps because the situation is impossible for previous JDBC rev levels.

          Thanks,
          -Rick

          Bryan Pendleton added a comment -

          I am fine with the proposed implementation. I think I was getting somewhat confused between
          compile-time support and run-time support.

          My train-of-thought regarding my reaction was something like: if I am told that a particular
          DB/driver/implementation is JDBC version X.Y, then I expect to look in the Javadocs for
          version X.Y and be able to call those APIs.

          I would not expect to have to use reflection to do so.

          When I look up, e.g., java.sql.PreparedStatement, I expect the "Since:" field to
          help me with this comprehension. This doesn't work perfectly; for example, when I go to:
          http://download.oracle.com/javase/6/docs/api/java/sql/PreparedStatement.html#setRowId(int, java.sql.RowId)
          I see that the setRowId() method is marked "Since: 1.6", and I have to know that this actually means
          "JDBC 4.0", but once I accomplish that, I know that I am calling a method that requires JDBC 4.0.

          I note that when I go to http://www.oracle.com/technetwork/java/javase/jdbc/index.html which
          is presumably the base of all the JDBC spec definitions, I am referred to JDBC documentation
          for the base JDK, not by JDBC version number. That is, it says:

          JDBC documentation: J2SE 1.4.2 | J2SE 5.0 | Java SE 6

          At any rate, returning 4.1 seems fine to me; I think I am expressing a related, but independent,
          confusion that affects the life of the JDBC application programmer.
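
          For reference, the "Since:" values Bryan mentions line up with JDBC spec levels as follows. The class and map names below are purely illustrative; only the version correspondence itself (JDBC 3.0 in J2SE 1.4, JDBC 4.0 in Java SE 6, JDBC 4.1 in Java SE 7) is factual:

          ```java
          import java.util.LinkedHashMap;
          import java.util.Map;

          // Hypothetical lookup showing how a Javadoc "Since:" tag maps to a JDBC level.
          public class JdbcSinceMapping {
              static final Map<String, String> JDBC_LEVEL = new LinkedHashMap<String, String>();
              static {
                  JDBC_LEVEL.put("1.4", "JDBC 3.0");  // J2SE 1.4
                  JDBC_LEVEL.put("1.6", "JDBC 4.0");  // Java SE 6
                  JDBC_LEVEL.put("1.7", "JDBC 4.1");  // Java SE 7
              }

              public static void main(String[] args) {
                  // A "Since: 1.6" tag on a java.sql method actually means JDBC 4.0.
                  System.out.println(JDBC_LEVEL.get("1.6"));
              }
          }
          ```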

          Rick Hillegas added a comment -

          Thanks, Bryan. In a related email thread on derby-dev, Knut offered this opinion:

          "Since DatabaseMetaData.getJDBCMinorVersion() is supposed to return the JDBC minor version number for the driver (not for the platform it's running on) and we have a single JDBC 4 driver implementation that implements both JDBC 4.0 and JDBC 4.1, I think it sounds reasonable that our JDBC 4 driver always returns minor version 1."

          Bryan Pendleton added a comment -

          > If you run on Java 6, you can still call the 4.1 methods via reflection.
          > For this reason I believe it makes sense to report 4.1 as the JDBC level on Java 6

          Hmmm.. Is there precedent for this behavior? I'm not sure that's what I'd
          naively expect as a user of the JDBC API.

          Rick Hillegas added a comment -

          Tests passed cleanly for me except for the known trigger-related errors in the upgrade tests and the errors described on DERBY-5502. I don't think these errors are caused by this patch.

          Rick Hillegas added a comment -

          Attaching derby-5488-09-aa-jdbcMinorVersion.diff. This patch adjusts the JDBC level reported by Derby's drivers when running on Java 6 or later. The new version is 4.1. I am running tests now.

          Note that 4.1 is the version returned if you are running on Java 6 or Java 7.

          If you run on Java 6, you can still call the 4.1 methods via reflection. For this reason I believe it makes sense to report 4.1 as the JDBC level on Java 6 even though the platform itself only recognizes the 4.0 methods.
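
          The reflection route can be sketched as below. This is only an illustration, not Derby code; the class and method names here are hypothetical except for java.sql.Statement.closeOnCompletion, which is one of the JDBC 4.1 additions. On Java 7 or later the probe finds the method; on Java 6 it falls into the NoSuchMethodException branch:

          ```java
          import java.lang.reflect.Method;
          import java.sql.Statement;

          // Hypothetical probe: can the running JVM see the JDBC 4.1 method?
          public class Jdbc41Reflection {
              public static boolean supportsCloseOnCompletion() {
                  try {
                      Statement.class.getMethod("closeOnCompletion");
                      return true;   // Java 7+: the 4.1 method is on the interface
                  } catch (NoSuchMethodException e) {
                      return false;  // Java 6: the method is absent from java.sql
                  }
              }

              // Invoke the 4.1 method reflectively so the caller compiles on Java 6,
              // assuming the driver's Statement implementation provides it.
              static void invokeCloseOnCompletion(Statement stmt) throws Exception {
                  Method m = stmt.getClass().getMethod("closeOnCompletion");
                  m.invoke(stmt);
              }

              public static void main(String[] args) {
                  System.out.println(supportsCloseOnCompletion());
              }
          }
          ```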

          Touches the following files:

          ---------

          M java/engine/org/apache/derby/impl/jdbc/EmbedDatabaseMetaData40.java

          Change to embedded DatabaseMetaData.getJDBCMinorVersion().

          ---------

          M java/client/org/apache/derby/client/net/NetDatabaseMetaData40.java

          Change to network DatabaseMetaData.getJDBCMinorVersion().

          ---------

          M java/testing/org/apache/derbyTesting/functionTests/tests/jdbcapi/DatabaseMetaDataTest.java

          Change to metadata test.

          Rick Hillegas added a comment -

          Attaching derby-5488-08-aa-extraLimitOffsetTest.diff. This patch adds a couple additional limit/offset tests. Committed at subversion revision 1201041.

          Touches the following file:

          M java/testing/org/apache/derbyTesting/functionTests/tests/lang/OffsetFetchNextTest.java

          Rick Hillegas added a comment -

          Attaching derby-5488-07-aa-booleanObjects.diff. This patch eliminates the Boolean object creation as Dag recommended. Committed at subversion revision 1201025.

          Touches the following files:

          M java/engine/org/apache/derby/impl/sql/compile/SelectNode.java
          M java/engine/org/apache/derby/impl/sql/compile/UnionNode.java
          M java/engine/org/apache/derby/impl/sql/compile/RowResultSetNode.java
          M java/engine/org/apache/derby/impl/sql/compile/FromBaseTable.java
          M java/engine/org/apache/derby/impl/sql/compile/sqlgrammar.jj
          M java/engine/org/apache/derby/impl/sql/compile/IntersectOrExceptNode.java

          Rick Hillegas added a comment -

          Attaching derby-5488-06-aa-limitOffsetTests.diff. This patch revamps the OffsetFetchNextTest so that test cases are run against both the SQL Standard syntax and the JDBC limit/offset syntax. Committed at subversion revision 1201020.

          Touches the following file:

          M java/testing/org/apache/derbyTesting/functionTests/tests/lang/OffsetFetchNextTest.java

          Rick Hillegas added a comment -

          Thanks for reading the patch, Dag. I will make the change you recommend in a follow-on patch.

          Dag H. Wanvik added a comment -

          Thanks, Rick. The limit patch looks ok to me, good you took care of that pesky little semantics difference in LIMIT!

          Small nit: I'd change all the Boolean constructors of the kind "new Boolean( hasJDBClimitClause )" to
          "Boolean.valueOf( hasJDBClimitClause )" for performance, cf. this comment in the Javadoc of valueOf:

          "If a new Boolean instance is not required, this method should generally be used in preference to the constructor Boolean(boolean), as this method is likely to yield significantly better space and time performance."

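          The caching behavior that Javadoc quote describes can be checked with a tiny standalone program (class name hypothetical): Boolean.valueOf hands back the canonical Boolean.TRUE/Boolean.FALSE constants instead of allocating a fresh object on every call.

          ```java
          // Illustrative check: Boolean.valueOf returns the cached canonical
          // instances, which is why it beats the Boolean(boolean) constructor.
          public class BooleanValueOfDemo {
              public static void main(String[] args) {
                  boolean hasJDBClimitClause = true;
                  Boolean cached = Boolean.valueOf(hasJDBClimitClause);
                  System.out.println(cached == Boolean.TRUE);              // identical object
                  System.out.println(Boolean.valueOf(false) == Boolean.FALSE);
              }
          }
          ```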
          Bryan Pendleton added a comment -

          Thanks Rick, that is exactly what I was confused about, and your answer was very clear.

          I haven't had much experience with JDBC escape notation, so it is very helpful to see the examples.

          I think it would be wonderful if we could cut-and-paste your examples into the documentation somewhere,
          perhaps here:

          http://db.apache.org/derby/docs/10.8/ref/rrefsqljoffsetfetch.html#rrefsqljoffsetfetch

          Rick Hillegas added a comment -

          Hi Bryan,

          Sorry to not be clear. The short answer to your question is that what I am describing is behavior internal to Derby. The user can use either the JDBC escape syntax or the SQL Standard syntax. Internally, Derby keeps track of which syntax the user chose. Hopefully, the following will be more helpful:

          Normally, it's pretty easy for Derby to internally transform the JDBC limit/offset syntax into the corresponding SQL Standard syntax. For instance, the following statement which uses JDBC escape syntax...

          select * from T order by A

          { limit 20 offset 10 }

          ...is treated by Derby as equivalent to the following SQL Standard syntax:

          select * from T order by A offset 10 rows fetch next 20 rows only

          However, the following statement...

          select * from T order by A

          { limit 0 offset 10 }

          ...is not equivalent to...

          select * from T order by A offset 10 rows fetch next 0 rows only

          ...because "fetch next 0 rows" raises an exception. In this case, Derby just ignores the last clause, treating the original statement like...

          select * from T order by A offset 10 rows

          That's all well and good. The tricky part comes when ? parameters pop up.

          select * from T order by A

          { limit ? offset 10 }

          ...is treated like:

          select * from T order by A offset 10 rows fetch next ? rows only

          At run-time, Derby has to know that setting ? equal to 0 is OK if the original statement was the one with JDBC escape syntax, but not OK if the original statement was the one with SQL Standard syntax.

          To make it possible for Derby to make this distinction, I had to pass a boolean all the way from the parser to the run-time logic. The boolean indicates whether the original statement used JDBC escape syntax or SQL Standard syntax. That's what forced me to touch so many files along the way.

          I hope I have not made this more confusing. If this is still puzzling, please let me know.

          Thanks,
          -Rick
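
          The rewrite Rick describes above can be sketched as a standalone helper. This is only an illustration of the LIMIT-to-FETCH mapping, not Derby's actual parser code, and all names in it are hypothetical; the one semantic point it encodes is that a JDBC limit of 0 means "all rows past the offset" rather than "fetch 0 rows":

          ```java
          // Hypothetical helper mirroring the mapping from the JDBC escape clause
          // { limit L offset O } onto SQL Standard OFFSET/FETCH NEXT syntax.
          public class LimitOffsetRewrite {
              static String rewrite(int limit, int offset) {
                  StringBuilder sb = new StringBuilder();
                  if (offset > 0) {
                      sb.append("offset ").append(offset).append(" rows");
                  }
                  if (limit > 0) {   // limit 0: omit the clause entirely
                      if (sb.length() > 0) sb.append(' ');
                      sb.append("fetch next ").append(limit).append(" rows only");
                  }
                  return sb.toString();
              }

              public static void main(String[] args) {
                  System.out.println(rewrite(20, 10));  // normal case
                  System.out.println(rewrite(0, 10));   // limit 0: no fetch clause
              }
          }
          ```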

          Bryan Pendleton added a comment -

          Hi Rick, Thanks for adding this new feature.

          > Many files had to be touched in order to propagate whether we want SQL Standard or JDBC behavior.

          I'm not sure I understand. Can you expand on this? Is this something that the application
          programmer chooses, one behavior or another? How do they specify it?

          Rick Hillegas added a comment -

          Committed derby-5488-05-ad-limitOffset.diff at subversion revision 1200492.

          Rick Hillegas added a comment -

          Tests passed cleanly for me except for the 7 known trigger-related upgrade errors and the following 2 new file permission-related errors. I do not think these are related to this patch:

          1) testBasicRecovery(org.apache.derbyTesting.functionTests.tests.store.RecoveryTest)java.security.AccessControlException: access denied ("java.io.FilePermission" "<<ALL FILES>>" "execute")
          at java.security.AccessControlContext.checkPermission(AccessControlContext.java:366)
          at java.security.AccessController.checkPermission(AccessController.java:555)
          at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
          at java.lang.SecurityManager.checkExec(SecurityManager.java:799)
          at java.lang.ProcessBuilder.start(ProcessBuilder.java:1016)
          at java.lang.Runtime.exec(Runtime.java:615)
          at java.lang.Runtime.exec(Runtime.java:483)
          at org.apache.derbyTesting.junit.BaseTestCase$8.run(BaseTestCase.java:564)
          at java.security.AccessController.doPrivileged(Native Method)
          at org.apache.derbyTesting.junit.BaseTestCase.execJavaCmd(BaseTestCase.java:560)
          at org.apache.derbyTesting.junit.BaseTestCase.assertExecJavaCmdAsExpected(BaseTestCase.java:510)
          at org.apache.derbyTesting.junit.BaseTestCase.assertLaunchedJUnitTestMethod(BaseTestCase.java:864)
          at org.apache.derbyTesting.functionTests.tests.store.RecoveryTest.testBasicRecovery(RecoveryTest.java:89)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          at org.apache.derbyTesting.junit.BaseTestCase.runBare(BaseTestCase.java:116)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:21)
          at junit.extensions.TestSetup.run(TestSetup.java:25)
          at org.apache.derbyTesting.junit.BaseTestSetup.run(BaseTestSetup.java:57)
          2) doTestCliServerIsRestrictive(org.apache.derbyTesting.functionTests.tests.engine.RestrictiveFilePermissionsTest)java.security.AccessControlException: access denied ("java.io.FilePermission" "<<ALL FILES>>" "execute")
          at java.security.AccessControlContext.checkPermission(AccessControlContext.java:366)
          at java.security.AccessController.checkPermission(AccessController.java:555)
          at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
          at java.lang.SecurityManager.checkExec(SecurityManager.java:799)
          at java.lang.ProcessBuilder.start(ProcessBuilder.java:1016)
          at java.lang.Runtime.exec(Runtime.java:615)
          at java.lang.Runtime.exec(Runtime.java:483)
          at org.apache.derbyTesting.junit.NetworkServerTestSetup$3.run(NetworkServerTestSetup.java:342)
          at java.security.AccessController.doPrivileged(Native Method)
          at org.apache.derbyTesting.junit.NetworkServerTestSetup.startSeparateProcess(NetworkServerTestSetup.java:335)
          at org.apache.derbyTesting.junit.NetworkServerTestSetup.setUp(NetworkServerTestSetup.java:188)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:20)
          at junit.extensions.TestSetup.run(TestSetup.java:25)
          at org.apache.derbyTesting.junit.BaseTestSetup.run(BaseTestSetup.java:57)

          Rick Hillegas added a comment -

          Attaching derby-5488-05-ad-limitOffset.diff. This patch adds the JDBC LIMIT/OFFSET escape syntax, mapping it onto Derby's existing implementation of SQL Standard OFFSET/FETCH NEXT syntax. Ad-hoc experiments suggest that the patch works. The OffsetFetchNextTest passes cleanly. I will run full regression tests. Follow-on patches for tests will be needed.

          Most of the files which were touched were changed because of a difference between SQL Standard and JDBC behaviors: In the SQL Standard, the FETCH FIRST clause only lets you specify a positive number of rows to be returned--a value of 0 is supposed to raise an exception. In contrast, the JDBC escape syntax allows a LIMIT value of 0. That special value means that all rows should be returned from the OFFSET onwards. Many files had to be touched in order to propagate whether we want SQL Standard or JDBC behavior.

          Changes fall into 3 categories:

          1) Parse/bind-time changes. Parser references to the existing OFFSET and FETCH NEXT productions were replaced with a call to a new production which handles both the SQL Standard and the JDBC syntax. In addition, the constructors for various ResultSet nodes were changed in order to propagate the distinction between SQL Standard and JDBC behaviors.

          2) Code generator changes. Only one code generation method had to be touched.

          3) Run-time changes. A couple changes were necessary in order to propagate the distinction between SQL Standard and JDBC behaviors.

          Touches the following files:

          --------------

          M java/engine/org/apache/derby/impl/sql/compile/sqlgrammar.jj

          Parser changes to support the new JDBC escape syntax.

          --------------

          M java/engine/org/apache/derby/impl/sql/compile/ResultSetNode.java
          M java/engine/org/apache/derby/impl/sql/compile/FromSubquery.java
          M java/engine/org/apache/derby/impl/sql/compile/NormalizeResultSetNode.java
          M java/engine/org/apache/derby/impl/sql/compile/SelectNode.java
          M java/engine/org/apache/derby/impl/sql/compile/SubqueryNode.java
          M java/engine/org/apache/derby/impl/sql/compile/RowCountNode.java
          M java/engine/org/apache/derby/impl/sql/compile/ProjectRestrictNode.java
          M java/engine/org/apache/derby/impl/sql/compile/CursorNode.java
          M java/engine/org/apache/derby/impl/sql/compile/UnionNode.java
          M java/engine/org/apache/derby/impl/sql/compile/RowResultSetNode.java
          M java/engine/org/apache/derby/impl/sql/compile/FromBaseTable.java
          M java/engine/org/apache/derby/impl/sql/compile/InsertNode.java
          M java/engine/org/apache/derby/impl/sql/compile/SetOperatorNode.java
          M java/engine/org/apache/derby/impl/sql/compile/CreateViewNode.java
          M java/engine/org/apache/derby/impl/sql/compile/IntersectOrExceptNode.java

          Parse/bind-time changes to propagate the distinction between SQL Standard and JDBC behaviors.

          In addition, the generate() method of RowCountNode was touched for the same reason.

          --------------

          M java/engine/org/apache/derby/impl/sql/execute/GenericResultSetFactory.java
          M java/engine/org/apache/derby/impl/sql/execute/RowCountResultSet.java
          M java/engine/org/apache/derby/iapi/sql/execute/ResultSetFactory.java
          M java/engine/org/apache/derby/loc/messages.xml

          Run-time changes to handle the distinction between SQL Standard and JDBC behaviors.

          Rick Hillegas added a comment -

          Committed derby-5488-04-aa-fixBigIntegerDecimal.diff at subversion revision 1199392. This eliminates the NPEs in ParameterMappingTest when you run it on OJEC. However, the test still raises an error because of DERBY-5497, which appears to be a bug in OJEC itself.

          Rick Hillegas added a comment -

          Attaching derby-5488-04-aa-fixBigIntegerDecimal.diff. This patch is not ready for commit yet because there is still a problem in ParameterMappingTest on CDC/FP 1.1.

          This patch gets rid of the NPE. It does this by moving some logic out of the data factory into SQLDecimal and by implementing some more methods in the small device version of SQLDecimal, which is called BigIntegerDecimal. I don't think this is the correct long term fix. The correct long term fix is for our small device implementation to use SQLDecimal and to get rid of BigIntegerDecimal. This ought to be possible because java.math.BigDecimal is in CDC/FP 1.1 (it wasn't in the earlier version of CDC/FP on which our small device implementation was based originally). However, I think that there may be some deserialization and upgrade issues for legacy databases which have stored BigIntegerDecimals. Fixing those issues seems to me to be the province of another JIRA.

          After clearing away the current problems with ParameterMappingTest, the test trundles along and hits a new problem: a failure on small devices to resolve a procedure which has BINARY argument types. I have attached a program, z.java, which shows this problem. The program runs fine on JDK 6 but fails on OJEC.

          Rick Hillegas added a comment -

          I have installed Oracle Java ME Embedded Client in my Ubuntu guest. I am able to reproduce the NPE you see.

          Rick Hillegas added a comment -

          Thanks for running that experiment, Knut. I will try to reconstruct my small device environment so that I can reproduce your results. Thanks.

          Knut Anders Hatlen added a comment -

          NumberDataType.setBigDecimal() has this code:

          if ( (bdc.compareTo(NumberDataType.MINLONG_MINUS_ONE) == 1)
              && (bdc.compareTo(NumberDataType.MAXLONG_PLUS_ONE) == -1)) {
              setValue(bigDecimal.longValue());
          } else {

          However, NumberDataType.MINLONG_MINUS_ONE and NumberDataType.MAXLONG_PLUS_ONE are not initialized on CDC/FP, and we get a NullPointerException whenever we try to set a BigDecimal value.
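The failure mode is easy to reproduce in isolation: java.math.BigDecimal.compareTo() throws a NullPointerException when its argument is null, so a range-limit constant that was never initialized makes every setBigDecimal() call blow up. A minimal sketch (the null MINLONG_MINUS_ONE field here merely stands in for the uninitialized static on CDC/FP; it is not the real Derby declaration):

```java
import java.math.BigDecimal;

public class NpeRepro {
    // Stand-in for NumberDataType.MINLONG_MINUS_ONE, which stays
    // null on CDC/FP because its initializer never runs there.
    static final BigDecimal MINLONG_MINUS_ONE = null;

    public static void main(String[] args) {
        BigDecimal bdc = new BigDecimal("42");
        try {
            // Mirrors the range check in NumberDataType.setBigDecimal().
            bdc.compareTo(MINLONG_MINUS_ONE);
            System.out.println("no exception");
        } catch (NullPointerException npe) {
            System.out.println("NullPointerException, as in the stack trace above");
        }
    }
}
```

Running this prints the NullPointerException branch, matching the `BigDecimal.compareTo` frame at the top of the reported `Caused by` chain.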

          Knut Anders Hatlen added a comment -

          Thanks, the changes look good to me. I thought perhaps these changes might make ParameterMappingTest work on CDC/FP now, so I enabled the test and ran it on phoneME and on Oracle Java ME Embedded Client. It still failed, but now the failures were NullPointerExceptions. For example:

          1) test_jdbc4_1_objectMappings(org.apache.derbyTesting.functionTests.tests.jdbcapi.ParameterMappingTest)java.sql.SQLException: Java exception: ': java.lang.NullPointerException'.
          at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(SQLExceptionFactory.java:45)
          at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Util.java:142)
          at org.apache.derby.impl.jdbc.Util.javaException(Util.java:299)
          at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(TransactionResourceImpl.java:436)
          at org.apache.derby.impl.jdbc.EmbedResultSet.noStateChangeException(EmbedResultSet.java:4472)
          at org.apache.derby.impl.jdbc.EmbedPreparedStatement.setBigDecimal(EmbedPreparedStatement.java:470)
          at org.apache.derby.impl.jdbc.EmbedPreparedStatement.setObject(EmbedPreparedStatement.java:1356)
          at org.apache.derbyTesting.functionTests.tests.jdbcapi.ParameterMappingTest.test_jdbc4_1_objectMappings(ParameterMappingTest.java:958)
          at org.apache.derbyTesting.junit.BaseTestCase.runBare(BaseTestCase.java:116)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:21)
          at junit.extensions.TestSetup.run(TestSetup.java:25)
          at org.apache.derbyTesting.junit.BaseTestSetup.run(BaseTestSetup.java:57)
          at sun.misc.CVM.runMain(CVM.java:555)
          Caused by: java.lang.NullPointerException
          at java.math.BigDecimal.compareTo(BigDecimal.java:788)
          at java.math.BigDecimal.compareTo(BigDecimal.java:815)
          at org.apache.derby.iapi.types.NumberDataType.setBigDecimal(NumberDataType.java:434)
          at org.apache.derby.impl.jdbc.EmbedPreparedStatement.setBigDecimal(EmbedPreparedStatement.java:467)
          ... 22 more

          Do you see any reason why the call to setBigDecimal() should fail with a NullPointerException here?

          Rick Hillegas added a comment -

          Tests passed cleanly for me except for the trigger-related upgrade test failures. Committed at subversion revision 1197264.

          Rick Hillegas added a comment -

          Attaching derby-5488-03-ac-moveDecimalSetterGetterAndTest.diff. This moves the get/setBigDecimal() logic out of the JDBC 2.0 implementation into the JSR 169 implementation. I have added a test to verify that setObject( x, BigDecimal ) and setObject( x, BigInteger ) behave correctly for CallableStatements as well as PreparedStatements. I am running tests now.

          Touches the following files:

          ----------

          M java/engine/org/apache/derby/impl/jdbc/EmbedCallableStatement20.java
          M java/engine/org/apache/derby/impl/jdbc/EmbedPreparedStatement20.java
          M java/engine/org/apache/derby/impl/jdbc/EmbedCallableStatement.java
          M java/engine/org/apache/derby/impl/jdbc/EmbedPreparedStatement.java
          M java/testing/org/apache/derbyTesting/functionTests/tests/jdbcapi/CallableTest.java

          Rick Hillegas added a comment -

          Committed derby-5488-02-aa-fixBigInteger.diff at subversion revision 1197172.

          Rick Hillegas added a comment -

          I saw lots of errors in the regression tests but they do not seem to be related to this patch. The errors were of two types:

          1) The pre-existing trigger-related errors in the upgrade tests.

          2) Problems in spawning another JVM in the SecureServerTests and the Replication tests. I think that something has changed in our JVM-spawning logic which breaks on my preview JDK 7 on the Mac. I will log a JIRA to track this.

          Rick Hillegas added a comment -

          Thanks, Knut. I will look into moving the logic out of EmbedPreparedStatement20 into EmbedPreparedStatement.

          Knut Anders Hatlen added a comment -

          If that works, it sounds reasonable (and more consistent). If not, I think it's fine to leave it as it is. Since EmbedPreparedStatement169 does not implement setBigDecimal(), some more changes may be necessary.

          By the way, does setObject(BigInteger) work on CallableStatement now? If we use the setObjectConvert() override, I think we need an override in EmbedCallableStatement20 too. If we push the logic down to EmbedPreparedStatement again, it should probably work automatically. Might be worth a test case to verify that both PreparedStatement and CallableStatement work correctly, though.

          Rick Hillegas added a comment -

          Hi Knut,

          This is my understanding of the behavior of setObject( int, BigInteger ) on CDC/FP 1.1:

          1) It does not work today.

          2) It did work in the first rev of the patch.

          3) It does not work in the second patch.

          Since the behavior of setObject( int, BigInteger ) depends on the behavior of setObject( int, BigDecimal ) in this implementation, I moved the real logic into EmbedPreparedStatement20.setObjectConvert(), where BigDecimal is handled. It seems to me that the logic in that method could be moved into EmbedPreparedStatement.setObject(). Then both setObject( int, BigDecimal ) and setObject( int, BigInteger ) would work on CDC/FP 1.1.

          I could do that in a follow-on patch. Does that sound reasonable?

          Thanks,
          -Rick
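The refactoring described above amounts to handling the BigInteger case in the shared base class's setObject() dispatch, so both prepared and callable statements inherit it on every platform. A sketch of that shape (the class and method names below are simplified stand-ins for the Derby hierarchy, not the real signatures):

```java
import java.math.BigDecimal;
import java.math.BigInteger;

// Simplified stand-in for EmbedPreparedStatement; not the real API.
class BasePreparedStatement {
    BigDecimal lastValue; // records what the engine would receive

    public void setObject(int parameterIndex, Object x) {
        if (x instanceof BigInteger) {
            // Convert in the base class so the JSR-169 build, which
            // never loads the JDBC 2.0 subclass, gets the mapping too.
            setBigDecimal(parameterIndex, new BigDecimal((BigInteger) x));
        }
        // ... other object mappings elided ...
    }

    public void setBigDecimal(int parameterIndex, BigDecimal v) {
        lastValue = v;
    }
}

// The callable statement extends the prepared statement, so it picks
// up the conversion automatically -- the point of pushing it down.
class BaseCallableStatement extends BasePreparedStatement { }

public class DispatchSketch {
    public static void main(String[] args) {
        BaseCallableStatement cs = new BaseCallableStatement();
        cs.setObject(1, new BigInteger("123"));
        System.out.println(cs.lastValue); // 123
    }
}
```

This is why moving the logic down makes CallableStatement "work automatically", as Knut notes: inheritance does the rest.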

          Knut Anders Hatlen added a comment -

          Thanks, Rick. If I understand the new patch correctly, setObject(x, new BigInteger("123")) will stop working on CDC/FP because the conversion code is moved from EmbedPreparedStatement to EmbedPreparedStatement20, right? That's probably fine, since this conversion is not defined by JSR-169, just wanted to confirm that I've understood correctly.

          Rick Hillegas added a comment -

          Attaching derby-5488-02-aa-fixBigInteger.diff. This patch corrects the behavior of setObject( int, BigInteger ). I will run tests.

          The overflow/underflow/truncation behavior of setObject() is not clearly documented by the JDBC spec. Lance Andersen thinks that this might be addressed in JDBC 4.2, the next rev of the spec which will accompany Java 8.

          In the meantime, I have let the following principles guide the revised implementation of setObject( int, BigInteger ):

          1) Overflow/underflow/truncation should behave as it does for other numeric objects.

          2) BigInteger should not be less capable than the corresponding BigDecimal for the same integer value.

          Touches the following files:

          ----------

          M java/engine/org/apache/derby/impl/jdbc/EmbedPreparedStatement20.java
          M java/engine/org/apache/derby/impl/jdbc/EmbedPreparedStatement.java

          Fix for embedded JDBC driver.

          ----------

          M java/client/org/apache/derby/client/am/CrossConverters.java
          M java/client/org/apache/derby/client/am/PreparedStatement.java

          Fix for network JDBC driver.

          ----------

          M java/testing/org/apache/derbyTesting/functionTests/tests/jdbcapi/ParameterMappingTest.java

          Additional test case to verify overflow/underflow/truncation behavior.

          Rick Hillegas added a comment -

          Committed derby-5488-01-aa-objectMappingAndConversion.diff at subversion revision 1196680.

          Rick Hillegas added a comment -

          Thanks for taking a look at the patch, Knut. I think I will check it in as is and then we can sand down the behavior of setObject( BigInteger ) in a follow-on patch. I will post some thoughts after consulting the experts.

          Knut Anders Hatlen added a comment -

          + } else if (source instanceof java.math.BigInteger) {
          +     return setObject(targetType, ((java.math.BigInteger) source).longValue());
          + }

          else if (x instanceof java.math.BigInteger) {
          +     setLong(parameterIndex, ((java.math.BigInteger) x).longValue());

          What if the BigInteger contains a number greater than Long.MAX_VALUE or less than Long.MIN_VALUE? Should we convert it to a BigDecimal instead of a Long in that case?
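One way to implement that suggestion is to route values that fit in a long through longValue() and fall back to a lossless BigDecimal otherwise. A sketch (toJdbcNumber is a hypothetical helper, not Derby code): BigInteger.bitLength() excludes the sign bit, so a value fits in a signed 64-bit long exactly when bitLength() <= 63, including Long.MIN_VALUE.

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class BigIntegerRouting {
    // Hypothetical helper: pick a lossless representation for setObject().
    static Object toJdbcNumber(BigInteger x) {
        // bitLength() excludes the sign bit, so <= 63 means the value
        // fits in a long without truncation (Long.MIN_VALUE included).
        if (x.bitLength() <= 63) {
            return x.longValue();   // exact
        }
        return new BigDecimal(x);   // out of long range: exact as BigDecimal
    }

    public static void main(String[] args) {
        BigInteger max = BigInteger.valueOf(Long.MAX_VALUE);
        System.out.println(toJdbcNumber(max).getClass().getSimpleName());                 // Long
        System.out.println(toJdbcNumber(max.add(BigInteger.ONE)).getClass().getSimpleName()); // BigDecimal
    }
}
```

Unconditional longValue(), by contrast, silently wraps for out-of-range values, which is the truncation hazard being raised here.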

          Rick Hillegas added a comment -

          Tests passed for me except for the trigger-related errors in the upgrade tests we have been seeing on the trunk for a while.

          Rick Hillegas added a comment -

          Attaching derby-5488-01-aa-objectMappingAndConversion.diff. This patch implements the new object mappings and conversions described by the functional spec. I will run full regression tests.

          Touches the following files:

          ------------

          M java/engine/org/apache/derby/impl/jdbc/EmbedPreparedStatement.java

          Adds new mappings and conversions to the embedded JDBC driver.

          ------------

          M java/client/org/apache/derby/client/am/PreparedStatement.java
          M java/client/org/apache/derby/client/am/CrossConverters.java

          Adds new mappings and conversions to the network JDBC driver.

          ------------

          M java/testing/org/apache/derbyTesting/functionTests/tests/jdbcapi/ParameterMappingTest.java

          Adds new tests to verify the mappings and conversions.

          Rick Hillegas added a comment -

          Thanks for helping me puzzle through these issues, Kristian. The default behavior of these methods is to measure positions and lengths in characters (I have just confirmed this with JDBC spec lead Lance Andersen). That's what Derby does already and that seems to me to be the portable usage. What the new, optional arguments let you do is ask the database to measure positions and lengths in octets. I think that octet-lengths are not likely to be portable. That is because the octet-lengths will vary depending on whether the engine code is written in Java or C. For C databases, octet-lengths will vary depending on the default encoding for the engine. I think we have a good portability story here. Thanks.
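The character-versus-octet distinction is easy to see in plain Java: a string's length in characters is fixed, but its octet length depends entirely on the encoding, which is why octet-based positions and lengths are unlikely to be portable across engines:

```java
import java.nio.charset.StandardCharsets;

public class LengthUnits {
    public static void main(String[] args) {
        String s = "caf\u00e9"; // "café": 4 characters

        System.out.println(s.length());                                   // 4 characters
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length);    // 5 octets ('é' takes 2 bytes)
        System.out.println(s.getBytes(StandardCharsets.UTF_16BE).length); // 8 octets (2 bytes per char)
    }
}
```

The same value measures 4, 5, or 8 depending on the encoding, so only the character count is stable across Java- and C-based engines.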

          Kristian Waagan added a comment -

          I was thinking about the case where you use the escape syntax to specify that you want to use character lengths. If this is added to make database X, which also supports octets, use character lengths, will the application work if database X is replaced with Derby? Or will Derby choke on that optional argument?

          I'm not saying we should change whatever we have in Derby, I'm trying to understand the expected behavior.

          Rick Hillegas added a comment -

          Hi Kristian,

          Can you clarify what portability issues you are concerned about? It seems to me that octets are only applicable to languages (like C) which represent strings as arrays of 8-bit bytes.

          Thanks,
          -Rick

          Kristian Waagan added a comment -

          Hi Rick,

          You say the "New String Function Syntax" is meaningless for Derby. Is this true with respect to portability too?
          I know I'm being lazy, but I hope you already know the spec well enough to answer the question.

          Rick Hillegas added a comment -

          Attaching a functional spec for the additional bits: JDBC_4.1_Supplement.html


            People

            • Assignee:
              Rick Hillegas
              Reporter:
              Rick Hillegas