Derby / DERBY-2892

Closing a ResultSet after retrieving a large (> 32665 bytes) value with the Network Server does not release locks

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 10.2.2.0, 10.3.1.4, 10.3.2.1
    • Fix Version/s: 10.3.3.0, 10.4.1.3, 10.5.1.1
    • Component/s: SQL
    • Labels:
      None
    • Environment:
      JDK: build 1.6.0_01-b06 (WinXP & Gentoo/SuSE)
      Hardware: Intel x86
      Client/Server environment
    • Urgency:
      Normal
    • Bug behavior facts:
      Regression

      Description

      This is the same issue as DERBY-255 (https://issues.apache.org/jira/browse/DERBY-255). The test attached to DERBY-255 shows that the locks are not released. Everything is fine when using Derby 10.1.3.1.

      I would think it's a regression bug.

      Output from sysinfo:

      ------------------ Java Information ------------------
      Java version: 1.6.0_01
      Java vendor: Sun Microsystems Inc.
      Java home: C:\work\applications\development\java\jdk1.6u1-SE\jre
      Java classpath: C:\work\applications\development\derby-10.2.2.0/lib/derby.jar;C:\work\applications\development\derby-10.2.2.0/lib/derbynet.jar;C:\work\applications\development\derby-10.2.2.0/lib/derbyclient.jar;C:\work\applications\development\derby-10.2.2.0/lib/derbytools.jar
      OS name: Windows XP
      OS architecture: x86
      OS version: 5.1
      Java user name: thomas.niessen
      Java user home directory: C:\Dokumente und Einstellungen\thomas.niessen
      Java user directory: C:\work\applications\development\derby-10.2.2.0
      java.specification.name: Java Platform API Specification
      java.specification.version: 1.6
      --------- Derby Information --------
      JRE - JDBC: Java SE 6 - JDBC 4.0
      [C:\work\applications\development\derby-10.2.2.0\lib\derby.jar] 10.2.2.0 - (485682)
      [C:\work\applications\development\derby-10.2.2.0\lib\derbytools.jar] 10.2.2.0 - (485682)
      [C:\work\applications\development\derby-10.2.2.0\lib\derbynet.jar] 10.2.2.0 - (485682)
      [C:\work\applications\development\derby-10.2.2.0\lib\derbyclient.jar] 10.2.2.0 - (485682)
      ------------------------------------------------------
      ----------------- Locale Information -----------------
      Current locale: [German/Germany [de_DE]]
      Found support for locale: [cs]
      Version: 10.2.2.0 - (485682)
      Found support for locale: [de_DE]
      Version: 10.2.2.0 - (485682)
      Found support for locale: [es]
      Version: 10.2.2.0 - (485682)
      Found support for locale: [fr]
      Version: 10.2.2.0 - (485682)
      Found support for locale: [hu]
      Version: 10.2.2.0 - (485682)
      Found support for locale: [it]
      Version: 10.2.2.0 - (485682)
      Found support for locale: [ja_JP]
      Version: 10.2.2.0 - (485682)
      Found support for locale: [ko_KR]
      Version: 10.2.2.0 - (485682)
      Found support for locale: [pl]
      Version: 10.2.2.0 - (485682)
      Found support for locale: [pt_BR]
      Version: 10.2.2.0 - (485682)
      Found support for locale: [ru]
      Version: 10.2.2.0 - (485682)
      Found support for locale: [zh_CN]
      Version: 10.2.2.0 - (485682)
      Found support for locale: [zh_TW]
      Version: 10.2.2.0 - (485682)

      1. DERBY-2892_07_10_07_try1_diff.txt
        15 kB
        Kathey Marsden
      2. DERBY-2892_07_10_07_try1_stat.txt
        0.4 kB
        Kathey Marsden
      3. DERBY-2892_07_13_07_try2_diff.txt
        14 kB
        Kathey Marsden
      4. DERBY-2892_07_13_07_try2_stat.txt
        0.7 kB
        Kathey Marsden
      5. derby-2892_10.3_try1_diff.txt
        26 kB
        Kathey Marsden
      6. derby-2892.diff
        19 kB
        Øystein Grøvlen
      7. derby-2892-1a-alternative_fix_partial.diff
        9 kB
        Kristian Waagan
      8. derby-2892-1b-alternative_fix_partial.diff
        9 kB
        Kristian Waagan
      9. derby-2892-2a-alternative_fix_cleanup.diff
        6 kB
        Kristian Waagan
      10. derby-2892firstshot.diff
        4 kB
        Øystein Grøvlen
      11. protocolErrorRepro.zip
        181 kB
        Kathey Marsden

        Issue Links

        There are no Sub-Tasks for this issue.

          Activity

          kmarsden Kathey Marsden added a comment -

          I verified that this is a regression in 10.2 and still exists in the trunk.

          Kathey

          kmarsden Kathey Marsden added a comment -

          Linking to DERBY-255 since this is a regression of that fix.

          kmarsden Kathey Marsden added a comment -

          It looks like this regressed with the check-in for DERBY-326 (svn 405037). This is when the lock timeout was reintroduced into the client canon for blobclob4blob:
          179014 kmarsden START: clobTest92
          405037 tmnk FAIL – unexpected exception ****************
          405037 tmnk SQLSTATE(40XL1): A lock could not be obtained within the time requested

          kmarsden Kathey Marsden added a comment -

          Changing to critical since this is a regression that is likely to be hit by users.

          kmarsden Kathey Marsden added a comment -

          Thomas, what JDBC API call are you using? I see the test clobTest92 is doing a getClob(), which legitimately holds locks. It is ResultSet.getCharacterStream(), ResultSet.getString(), ResultSet.getBinaryStream() and ResultSet.getBytes() that should not hold locks after the ResultSet is closed. For some reason, today when I run the repro for DERBY-255 I'm not seeing the locks being held, although I was sure I saw it yesterday, so I am getting a bit confused.

          kmarsden Kathey Marsden added a comment -

          I checked in a test jdbcapi/LargeDataLocksTest.java
          Date: Fri Jul 6 15:11:12 2007
          New Revision: 554073

          URL: http://svn.apache.org/viewvc?view=rev&rev=554073

          to add test coverage for ResultSet.getCharacterStream(), ResultSet.getString(), ResultSet.getBinaryStream() and ResultSet.getBytes() with large values, to make sure locks are not being held. I am beginning to think that I may have panicked when seeing this regression filed; I can't seem to reproduce it now. Anyway, I will leave this open until we hear from Thomas. Perhaps we can get a change to the new test that shows a regression, or perhaps he is just using getBlob()/getClob() and locks are expected to stick around until the end of the transaction.

          Kathey

          kmarsden Kathey Marsden added a comment -

          Apparently I had an older server running on my machine. I am now able to reproduce: the DERBY-255 repro fails on 10.3, and the LargeDataLocksTest fails on trunk. I disabled this test for the client until this bug is fixed.

          oysteing Øystein Grøvlen added a comment -

          Kathey's comment seems to imply that getClob should cause the CLOB data to be locked while getCharacterStream should not. Is this behavior defined by the JDBC or SQL standards? It does not seem obvious to me that a Clob object representing a CLOB in the database should have different stability requirements from an InputStream object on the same CLOB.

          kmarsden Kathey Marsden added a comment -

          Clob and Blob explicitly state that they are valid for the duration of the transaction in which they are created.

          getBinaryStream says:
          "Note: All the data in the returned stream must be read prior to getting the value of any other column. The next call to a getter method implicitly closes the stream", implying that the stream is much shorter lived.

          getCharacterStream does not say one way or another, but I assume it is parallel with getBinaryStream().

          Network server does not know the difference between what was called in JDBC (getBlob() vs getBinaryStream() vs getBytes(), for example). Prior to the fix for DERBY-326, Network Server would always call getBinaryStream() instead of getBlob(), and getCharacterStream() rather than getClob(). Since the lobs were materialized on the client, we could still preserve the Blobs and Clobs until the end of the transaction. I am not sure how this might be different now with the locator work, and how changing back to getBinaryStream()/getCharacterStream() calls would impact that.

          kmarsden Kathey Marsden added a comment -

          There seem to be additional complications in 10.3 vs 10.2. In 10.3 the getString and getCharacterStream calls request a lob locator, which does a getObject (getBlob()/getClob()).

          As for a 10.2 fix, I played with changing the Blob.getBinaryStream and Clob.getCharacterStream calls to ResultSet.getBinaryStream and ResultSet.getCharacterStream. The trouble is that the current implementation makes these calls twice, which is not allowed: once to see if the stream is empty, and then again with this code in EXTDTAInputStream:

          boolean exist = is.read() > -1;
          is.close();
          is = null;

          if (exist) {
              openInputStreamAgain(rs, column, ndrdaType);
          }
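The problem with this probe can be demonstrated with plain java.io streams, independent of Derby: InputStream.read() consumes the byte it returns, so after testing for emptiness the stream no longer delivers its full contents, which is why the code has to open the stream a second time. A minimal sketch (the class and method names here are illustrative, not Derby code):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class EmptyProbe {
    // Probing a stream for emptiness the way the snippet above does:
    // read() consumes the first byte, so the caller must reopen the
    // stream (or buffer it) to deliver the complete value afterwards.
    static boolean isNonEmpty(InputStream is) throws IOException {
        return is.read() > -1;
    }

    public static void main(String[] args) throws IOException {
        InputStream is = new ByteArrayInputStream(new byte[] {1, 2, 3});
        System.out.println(isNonEmpty(is));  // true
        // The probe has consumed one byte: only 2 of the 3 remain.
        System.out.println(is.available());  // 2
    }
}
```

A forward-only JDBC stream cannot simply be rewound, so the original code compensated with openInputStreamAgain(); the BufferedInputStream mark()/reset() approach discussed later avoids the second open entirely.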
          oysteing Øystein Grøvlen added a comment -

          Kathey Marsden (JIRA) wrote:
          >
          > Clob and Blob explicitly state that they are valid for the duration
          > of the transaction in which they are created.

          While I have been aware of this, I have not interpreted it to mean
          that the value could not change, just that it should be possible to
          use it to access the underlying LOB column. However, I think you are
          right that in the context of result sets one would expect the LOB
          object to represent a LOB value, not a LOB column.

          >
          > getBinaryStream says:
          > "Note: All the data in the returned stream must be read prior to
          > getting the value of any other column. The next call to a getter
          > method implicitly closes the stream", implying that the stream is
          > much shorter lived.

          Thank you for making me aware of this. I guess I should have studied
          the API doc better. Is the "implicitly closes" part implemented by
          Derby? I cannot remember having seen any code for this.

          >
          > getCharacterStream does not say one way or another, but I assume it
          > is parallel with getBinaryStream().

          getAsciiStream says the same as getBinaryStream().

          >
          > Network server does not know the difference between what was called
          > in JDBC, getBlob() vs getBinaryStream() vs getBytes() for example.
          > Prior to the fix for DERBY-326, Network Server would always call
          > getBinaryStream() instead of getBlob and getCharacterStream() rather
          > than getClob(). Since the lobs were materialized on the client we
          > could still preserve the Blobs and Clobs until the end of the
          > transaction. I am not sure how this might be different now with the
          > locator work and how changing back to getBinaryStream()
          > getCharacterStream() calls would impact that.

          With locators all operations in the network server are performed on
          Blob/Clob objects. For example, InputStream.read will map to
          Blob.getBytes on the server. Hence, I would assume that locks will be
          held. I have not checked, but this may now be true even for the
          embedded driver. I see two possible solutions to this problem:

          1. Change how the locking is done. Maybe one could provide a way to
          release locks when they are no longer needed.
          2. Make a copy of the LOB value and allow concurrent updates. In
          10.3 this is now possible since there is a mechanism for making
          temporary copies of LOBs. In order for this to be efficient, we
          should only make a copy when necessary. Hence, a copy-on-write
          mechanism would be useful.

          Another thing:

          I noticed in Ole's LOB compatibility testing for 10.3 that in the
          10.3 version of the BlobClob4BlobTest, a locking test failed when
          running a 10.3 client with a 10.1 server (a 10.3 client with a 10.2
          server went OK). Does this mean that the test has been updated to
          accept this faulty behavior?

          kmarsden Kathey Marsden added a comment -

          rev 405037 updated the blobclob4BLOB test to accept the timeout.

          kmarsden Kathey Marsden added a comment -

          I just thought I would give an update on my 10.2 work on this issue. I tried switching EXTDTAInputStream to use ResultSet.getCharacterStream() and ResultSet.getBinaryStream(). Then, instead of opening the stream again as was done with the Clob/Blob, I wrapped the binary input stream in a BufferedInputStream so I could call mark(), read the first byte, and then reset(). All of this worked for the repro for DERBY-255, but it has caused protocol errors in the blobclob4BLOB test. Those I am still debugging. If anyone is interested in looking at the patch so far, I can attach it; otherwise I will just continue to work on it.
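The mark()/read/reset() approach described here can be sketched with plain java.io streams; none of this is actual Derby code, and the helper name is illustrative:

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class PeekStream {
    // Wrap the stream so it supports mark/reset, peek at the first
    // byte to test for emptiness, then rewind so the caller still
    // sees the complete value. No second open of the stream needed.
    static BufferedInputStream peekNonEmpty(InputStream raw, boolean[] nonEmpty)
            throws IOException {
        BufferedInputStream in = new BufferedInputStream(raw);
        in.mark(1);                    // remember position 0
        nonEmpty[0] = in.read() > -1;  // consume the (buffered) first byte
        in.reset();                    // rewind to position 0
        return in;
    }

    public static void main(String[] args) throws IOException {
        boolean[] nonEmpty = new boolean[1];
        BufferedInputStream in =
            peekNonEmpty(new ByteArrayInputStream(new byte[] {42, 7}), nonEmpty);
        System.out.println(nonEmpty[0]);  // true
        System.out.println(in.read());    // 42: the first byte is still there
    }
}
```

The mark(1) read limit is enough here because exactly one byte is read before reset(); a real server-side stream would also need care that the buffering does not disturb how the DRDA layer later drains the stream.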

          kmarsden Kathey Marsden added a comment -

          I am attaching a patch, C:/kmarsden/patches/DERBY-2892_07_10_07_try1_diff.txt. THIS PATCH IS NOT FOR COMMIT. It is my attempt so far to fix this issue in 10.2. It does indeed resolve the problem with the locks, and the new test jdbcapi/LargeDataLocksTest.java verifies the new locking behaviour, but the patch causes a serious protocol error in some instances when retrieving LOB values. I have been staring at it for a long time and can't figure out what is wrong, so I am hoping Bryan or another protocol expert can take a quick look and perhaps see something I have not thus far.

          I will attach the repro for the protocol error. Simply reversing the column order in the repro eliminates the error, and comparing the old and new traces I don't see what could be causing the client to think it has hit the end of the stream. See the repro for more details, and my previous comment for information on how the change was implemented.

          kmarsden Kathey Marsden added a comment -

          Here is the Java program and database to reproduce the protocol error. The client throws an exception saying it has reached EOF, but debugging the server I see that the server is queueing up the EXTDTA to send, yet does not send it because the client disconnects. Why the client disconnects and thinks it is at the end of the data stream, I cannot figure out.

          Note: If you run this twice in a row the second time you will get a protocol error on the connection.

          [C:/kmarsden/repro/derby-2892] java TestDERBY2892
          java.sql.SQLException: Insufficient data while reading from the network - expected a minimum of 6 bytes and received only -1 bytes. The connection has been terminated.
          at org.apache.derby.client.am.SQLExceptionFactory.getSQLException(SQLExceptionFactory.java:46)
          at org.apache.derby.client.am.SqlException.getSQLException(SqlException.java:346)
          at org.apache.derby.client.am.ResultSet.next(ResultSet.java:278)
          at TestDERBY2892.clobTest0(TestDERBY2892.java:37)
          at TestDERBY2892.main(TestDERBY2892.java:17)
          Caused by: org.apache.derby.client.am.DisconnectException: Insufficient data while reading from the network - expected a minimum of 6 bytes and received only -1 bytes. The connection has been terminated.
          at org.apache.derby.client.net.Reply.fill(Reply.java:195)
          at org.apache.derby.client.net.Reply.ensureALayerDataInBuffer(Reply.java:215)
          at org.apache.derby.client.net.Reply.readDssHeader(Reply.java:317)
          at org.apache.derby.client.net.Reply.peekCodePoint(Reply.java:1008)
          at org.apache.derby.client.net.NetResultSetReply.parseCNTQRYreply(NetResultSetReply.java:133)
          at org.apache.derby.client.net.NetResultSetReply.readFetch(NetResultSetReply.java:42)
          at org.apache.derby.client.net.ResultSetReply.readFetch(ResultSetReply.java:41)
          at org.apache.derby.client.net.NetResultSet.readFetch_(NetResultSet.java:206)
          at org.apache.derby.client.am.ResultSet.flowFetch(ResultSet.java:4138)
          at org.apache.derby.client.net.NetCursor.getMoreData_(NetCursor.java:1183)
          at org.apache.derby.client.am.Cursor.stepNext(Cursor.java:176)
          at org.apache.derby.client.am.Cursor.next(Cursor.java:195)
          at org.apache.derby.client.am.ResultSet.nextX(ResultSet.java:299)
          at org.apache.derby.client.am.ResultSet.next(ResultSet.java:269)
          ... 2 more

          bryanpendleton Bryan Pendleton added a comment -

          Hi Kathey, nothing immediately comes to mind, unfortunately.

          Did you run the program with the DRDA protocol tracing (http://wiki.apache.org/db-derby/ProtocolDebuggingTips), and if so can you post the client and server traces?

          Also, can you briefly outline what the test program does? Does it read the entire LOB data from the first row before advancing to the next row? Or does it advance to the next row while there still remains unread LOB data from the previous row?

          bryanpendleton Bryan Pendleton added a comment -

          > Simply reversing the column order in the repro eliminates the error

          This is eerily familiar; it seems to me that we hit this same behavior in 10.3 during the locator development, but I don't remember anything more than that.

          bryanpendleton Bryan Pendleton added a comment -

          I see that there are two server DRDA trace files in the zip archive.

          bryanpendleton Bryan Pendleton added a comment -

          Is there any evidence that you might be getting an exception on the server side just before the unexpected connection close? I've seen code paths where a thrown exception causes the connection to be closed, and the symptoms are sometimes not so clear, pointing more at the closed connection than at the underlying exception which caused the close.

          kmarsden Kathey Marsden added a comment -

          >Is there any evidence that you might be getting an exception on the server side just before the unexpected connection close?

          No, the server does not throw an exception until it detects that the client is disconnected, when it is getting ready to flush the lob data.

          >I see that there are two server DRDA trace files in the zip archive.
          Server1.trace.old is the trace with the unchanged server.
          Server1.trace.new is the trace with my changes.
          There is no difference in the traces that I can detect up until the point the client disconnects.

          >can you briefly outline what the test program does? Does it read the entire LOB data from the first row before advancing to the next row? Or does it advance to the next row while there still remains unread LOB data from the previous row?

It reads the entire lob for each row before advancing. You can also reproduce with ij even by just retrieving the one offensive row.
          ij> connect 'jdbc:derby://localhost:1527/wombat';
          ij(CONNECTION1)> select a, b from testCLOB_MAIN where b = 10000;
A                                                                   |B
------------------------------------------------------------------------------------------
ERROR 58009: Insufficient data while reading from the network - expected a minimum of 6 bytes and received only -1 bytes. The connection has been terminated.
ij(CONNECTION1)>
          bryanpendleton Bryan Pendleton added a comment -

          Hi Kathey! Just to be clear, to reproduce your results, I should:

          • build a 10.2 server which contains your patch to the DRDA code
          • start up this 10.2 server against the database that's in your zip file
          • use a 10.2 client to run the test Java program that's in your zip file
          • run the test Java program a second time

          Are those the right instructions?
          kmarsden Kathey Marsden added a comment -

That is right, Bryan, except that running the test java program once will reproduce the problem. Running it a second time will produce a different error on connection, which I think is a residual issue because there is residual data in the buffer. I don't think we need to worry about that error if we fix the first one. Running the java program a third time will reproduce the
insufficient data message again.

          Kathey

          bryanpendleton Bryan Pendleton added a comment -

          Hi Kathey. I can definitely reproduce the problem you're seeing.

          Given that the problem occurs with the patched EXTDTAInputStream, and does not occur with the stock 10.2 EXTDTAInputStream, it seems like it's worth studying those EXTDTAInputStream changes in depth to see how they might relate to this problem.

          I've been trying to understand this by building the code with and without the EXTDTAInputStream changes, and running the test case, trying to understand how the server's behavior changes with those changes.

So far, though, nothing of significance to report.

          kmarsden Kathey Marsden added a comment - - edited

          Bryan asked:
          >Is there any evidence that you might be getting an exception on the server side just before the unexpected connection close?

          Stepping through this more carefully, I am seeing an IOException occurring in writeScalarStream() which is causing the disconnection. Not sure why it is occurring, but it seems that we could benefit from better logging when such an exception occurs.

          Below is the trace of the server side I/O Exception. There is no message.
          null
          java.io.IOException
          at org.apache.derby.impl.jdbc.UTF8Reader.read(UTF8Reader.java:97)
          at org.apache.derby.impl.drda.ReEncodedInputStream.reEncode(ReEncodedInputStream.java:78)
          at org.apache.derby.impl.drda.ReEncodedInputStream.read(ReEncodedInputStream.java:143)
          at java.io.InputStream.read(InputStream.java:187)
          at java.io.BufferedInputStream.fill(BufferedInputStream.java:229)
          at java.io.BufferedInputStream.read(BufferedInputStream.java:246)
          at org.apache.derby.impl.drda.EXTDTAInputStream.read(EXTDTAInputStream.java:118)
          at org.apache.derby.impl.drda.DDMWriter.peekStream(DDMWriter.java:1969)
          at org.apache.derby.impl.drda.DDMWriter.writeScalarStream(DDMWriter.java:719)
          at org.apache.derby.impl.drda.DRDAConnThread.writeEXTDTA(DRDAConnThread.java:7806)
          at org.apache.derby.impl.drda.DRDAConnThread.writeQRYDTA(DRDAConnThread.java:6351)
          at org.apache.derby.impl.drda.DRDAConnThread.processCommands(DRDAConnThread.java:677)
          at org.apache.derby.impl.drda.DRDAConnThread.run(DRDAConnThread.java:273)

          bryanpendleton Bryan Pendleton added a comment -

          Line 97 of UTF8Reader.java is this:

          public int read(char[] cbuf, int off, int len) throws IOException
          {
              synchronized (lock) {
                  // check if closed..
                  if (noMoreReads)
                      throw new IOException(); // <============ This is line 97

          So it would seem that it is complaining about an attempt to read a stream which is already closed?

          bryanpendleton Bryan Pendleton added a comment -

          My first attempt to set a breakpoint on line 97 of UTF8Reader was not successful. I'll keep trying.

          bryanpendleton Bryan Pendleton added a comment -

          By "not successful", I mean that I didn't hit the breakpoint.

          bryanpendleton Bryan Pendleton added a comment -

          Putting aside this specific test case for a minute, if you apply your patch to EXTDTAInputStream, do all the rest of the 10.2 regression tests pass successfully?

          kmarsden Kathey Marsden added a comment -

          Thanks so much Bryan for your help.
          Well, I think I know what the problem is. It comes straight from the getBinaryStream javadoc and I assume getCharacterStream works the same way.

          Note: All the data in the returned stream must be read prior to getting the value of any other column. The next call to a getter method implicitly closes the stream. Also, a stream may return 0 when the method InputStream.available is called whether there is data available or not.

Since we are doing the getInt() after the getCharacterStream(), it closes the stream. So, my patch is no good. In order to use ResultSet.getCharacterStream or ResultSet.getBinaryStream, I am going to have to defer opening the stream until it is ready to send. I'll see if that is possible. I think it should be, because we don't need to know whether the stream is empty until that time.
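The implicit-close rule quoted from the javadoc can be illustrated with a small self-contained model. The classes below are hypothetical (not Derby or JDBC code); they just model the rule that a later getter call invalidates the stream returned earlier.

```java
import java.io.IOException;
import java.io.InputStream;

// Hypothetical minimal model of JDBC's implicit stream close:
// any later getter closes the previously returned stream.
class ModelResultSet {
    private ModelStream openStream;

    InputStream getBinaryStream(byte[] data) {
        openStream = new ModelStream(data);
        return openStream;
    }

    int getInt(int value) {
        // Per the javadoc quoted above, the next getter closes the open stream.
        if (openStream != null) openStream.closed = true;
        return value;
    }

    static class ModelStream extends InputStream {
        private final byte[] data;
        private int pos;
        boolean closed;
        ModelStream(byte[] data) { this.data = data; }
        @Override public int read() throws IOException {
            if (closed) throw new IOException("stream closed by a later getter");
            return pos < data.length ? data[pos++] & 0xff : -1;
        }
    }
}

public class ImplicitCloseDemo {
    public static void main(String[] args) {
        ModelResultSet rs = new ModelResultSet();
        InputStream in = rs.getBinaryStream(new byte[] {1, 2, 3});
        rs.getInt(42); // implicitly closes the stream
        try {
            in.read();
            System.out.println("read succeeded");
        } catch (IOException e) {
            System.out.println("IOException: " + e.getMessage());
        }
    }
}
```

This mirrors what the server-side patch ran into: getCharacterStream() followed by getInt() leaves the stream closed before it is ever read.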

          bryanpendleton Bryan Pendleton added a comment -

          >Since we are doing the getInt() after the getCharacterStream(), it closes the stream.

          Thanks Kathey, that makes perfect sense. Good work on figuring this out!

          kmarsden Kathey Marsden added a comment -

Well, deferring the opening of the stream until it is time to write the EXTDTA will not work, because we need to know whether the value is null when we send the QRYDTA. So I guess it's back to the drawing board. I don't have any good ideas for a fix at this point. The 10.1 solution was to read the whole lob into memory. I don't think we want to return to that for 10.2. Oystein had mentioned the possibility of changing the locking behavior as a solution, but I am way out of my league in that area so won't pursue that for 10.2. Hopefully, if it is changed for 10.3, we can backport the fix.

          Oystein mentioned two possibilities for 10.3

1. Change how the locking is done. Maybe one could provide a way to
release locks when they are no longer needed.

By this do you mean to release the locks when the LOB is no longer referenced? That sounds good, though it may still cause issues if the garbage collector has not kicked in; perhaps it would be suitable for backport to 10.2.

          2. Make a copy of the LOB value and allow concurrent updates. In
          10.3 this is now possible since there is a mechanism for making
          temporary copies of LOBs. In order for this to be efficient, we
          should only make a copy when necessary. Hence, a copy-on-write
          mechanism would be useful.

          This sounds ok too but does not offer any solution for 10.2, so I guess I would prefer 1.

          kmarsden Kathey Marsden added a comment -

          I checked in the test case jdbcapi/LargeDataLocks to 10.2. It should be uncommented in the jdbcapi suite once this bug is fixed in 10.2. The test on the trunk is the junit test jdbcapi/LargeDataLocksTest, also disabled.

          Kathey

          kmarsden Kathey Marsden added a comment -

          Attached is a new 10.2 patch for this issue.
I figured out how to determine if the value is null or empty before retrieving the stream. What I did was enhance the EngineResultSet interface to have an isNull(int columnIndex) and a getLength(int columnIndex) method that can be called before getBinaryStream() or getCharacterStream() is called. This eliminates the need to read a byte of the stream to see if it is empty or not.

          I would like to know if this is a reasonable approach and would appreciate review of the patch. I ran derbynetclientmats and the LargeDataLocks test with the patch and am running derbyall now.
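The approach described above can be sketched as follows. Only the method names isNull and getLength come from the comment; the interface name, the writer class, and the describe method are illustrative assumptions, not the actual Derby patch.

```java
// Hypothetical sketch: decide how to encode a LOB column
// (null / empty / streamed) without opening its stream first.
interface EngineResultSetSketch {
    boolean isNull(int columnIndex);
    long getLength(int columnIndex);
}

class ExtDtaWriterSketch {
    // Classify the column using only metadata, deferring the stream itself.
    static String describe(EngineResultSetSketch rs, int col) {
        if (rs.isNull(col)) return "NULL";          // write a null indicator
        if (rs.getLength(col) == 0) return "EMPTY"; // zero-length value
        return "STREAM";                            // open the stream only now
    }
}
```

The point of the design is that the null/empty decision no longer requires reading a byte, so the stream is never opened (and implicitly closed) prematurely.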

          naka Tomohito Nakayama added a comment -

Thank you for working on this issue.

I remember information about blobclob4blob in DERBY-326 which may be of help.

Before DERBY-326, the result of clobTest92 was different between Embedded and NetworkServer/NetworkClient.
After DERBY-326, the results of clobTest92 are the same between Embedded and NetworkServer/NetworkClient.

Related files for the 10.2 branch are below.

          Embedded :
          https://svn.apache.org/repos/asf/db/derby/code/branches/10.2/java/testing/org/apache/derbyTesting/functionTests/master/blobclob4BLOB.out

          NetworkServer/NetworkClient :
          https://svn.apache.org/repos/asf/db/derby/code/branches/10.2/java/testing/org/apache/derbyTesting/functionTests/master/DerbyNet/blobclob4BLOB.out
          https://svn.apache.org/repos/asf/db/derby/code/branches/10.2/java/testing/org/apache/derbyTesting/functionTests/master/DerbyNetClient/blobclob4BLOB.out

          kmarsden Kathey Marsden added a comment -

          Thanks Tomohito for looking at this issue. I think that with the fix for DERBY-255 I did not submit sufficient testing for the getString(), getCharacterStream(), getBytes(), getBinaryStream() cases where locks should not be held. This has hopefully been rectified with the addition of the LargeDataLocks test. I attached a patch to DERBY-2941 to fix the issue and enable that test.

          kmarsden Kathey Marsden added a comment -

          Now that DERBY-2941 is resolved, this regression only affects 10.3 and 10.4 which use lob locators. Oystein mentioned two possibilities for approaching this issue in 10.3. I previously stated that I would prefer changing how the locking is done so we could backport to 10.2, but that is no longer a consideration.

          oysteing Øystein Grøvlen added a comment -

I think I would prefer that one did not use database locks to
achieve stability for LOB objects. Locks are intended for
transaction isolation, not result set isolation. I guess the main
motivation for using locking is to avoid reading the entire LOB before
it is accessed. That is, when ResultSet.getBlob is called the Blob
value is not read; the reading is delayed until one performs
operations on that Blob. Pre-10.3, one major problem with making a
copy of a LOB would be that the entire LOB would have to be stored in
main memory. In 10.3, we have a mechanism for storing LOB values
temporarily on disk.

          One alternative would be to always make a copy when getBlob/getClob is
          called. That could significantly affect the performance of such an
          operation, but users can use ResultSet.getBinaryStream if they want
          to read a Blob with less overhead.

A more performant solution would be to delay copying until it is
needed. That is, when someone else tries to update the LOB, a copy is
made. I think such conflicts are not very typical for databases
with LOBs, so copying will normally not be done very often. However,
one needs some mechanism for detecting that the LOB to be updated is
currently being read by others.

I am trying to get my mind around what is required here. I do not
have a full understanding yet, but here are some aspects that need to
be considered:

          • When locators are used, a locator will map to a Blob/Clob object
            on the server. Client operations will be mapped to operations on
            the Blob/Clob objects. This makes the current locking mechanism
            not work as intended since you will get Blob/Clob objects without
            doing explicit getBlob/getClob calls.
          • The server is not able to distinguish whether an operation is
            performed on a Blob/Clob or directly on the ResultSet. E.g.,
            Blob.getBinaryStream and ResultSet.getBinaryStream are local
            operations on the client. Read operations on the streams will map
            to Blob.getBytes on the server.
          • Locators are not used for all types of client operations. E.g.,
            ResultSet.updateBinaryStream or PreparedStatement.setBinaryStream
            will set up a stream that maps to a stream on the server.
          • Updates to Blob/Clob objects will cause the object to be copied to
            temporary storage. The updates will be applied to the database
            when the Blob/Clob object is used to update a row. E.g., If
            ResultSet.updateBlob is used, the update will happen when
            ResultSet.updateRow is called. If PreparedStatement.setBlob is used,
            it will happen when the prepared statement is executed.
          • We will still need a mechanism to protect LOB values of the current
            row of a ResultSet from being updated. I am not familiar with the
            current mechanism here.
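Øystein's copy-on-write idea can be reduced to a minimal standalone sketch. This is illustrative only, not Derby's mechanism: readers share the current bytes without copying, and the first conflicting update clones them so the reader's view stays stable.

```java
// Minimal copy-on-write sketch for the idea above (illustrative only).
final class CowValue {
    private byte[] data;
    private boolean sharedWithReader;

    CowValue(byte[] initial) { data = initial; }

    // A reader gets a reference without copying; we only note the sharing.
    byte[] snapshotForReader() {
        sharedWithReader = true;
        return data;
    }

    // An update clones the bytes only if a reader still holds the old array.
    void update(int index, byte b) {
        if (sharedWithReader) {
            data = data.clone();
            sharedWithReader = false;
        }
        data[index] = b;
    }

    byte get(int index) { return data[index]; }
}
```

A reader that took a snapshot before an update keeps seeing the original bytes, while the updated value is visible through the owning object; the copy is paid for only when a read/write conflict actually occurs, matching the observation that such conflicts are rare for LOBs.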
          kmarsden Kathey Marsden added a comment -

          Thank you Oystein for sorting out the complexities of this issue.

          dagw Dag H. Wanvik added a comment -

          > - We will still need a mechanism to protect LOB values of the current
          > row of a ResultSet from being updated. I am not familiar with the
          > current mechanism here.

          As long as you are positioned on a row in the result set, a lock is
          set for that row (R or U if you have an updatable result set), which
          should protect the lob as long as you are positioned on it.

In a scrollable result set, a copy is taken of the row, so after
          positioning away from the row, the underlying lob would not be
          accessed via this result set again until the row is possibly updated
          (if the result set is updatable). The U-lock on the row is reset when
          repositioning back in that case.

          kmarsden Kathey Marsden added a comment -

This is a critical issue and a regression that has been hit by users in the past moving to 10.2. It is now fixed in 10.2 and regresses again going to 10.3. I am marking it Blocker because it seems like such a likely user hit. I realize nobody is working on it and it is likely to get moved back down to Urgent, but it just seems wrong to re-regress the product with 10.3.

          Kathey

          rhillegas Rick Hillegas added a comment -

          Reverting the Urgency to normal. This issue occurs in 10.2. I do not believe that this old issue should block the release of other quality improvements.

          kmarsden Kathey Marsden added a comment -

          Note: This issue has been resolved in 10.2 with DERBY-2941

          naka Tomohito Nakayama added a comment -

I executed LargeDataLocksTest in both the embedded and client cases.
The result was that the test passes in the embedded case but fails in the client case.

In both the embedded and client cases, the test code only closes the resultset.
I wonder what the exact difference is that causes this result.

          naka Tomohito Nakayama added a comment -

          I added the following test code to LargeDataLocksTest and
          found that the behavior was the same between embedded and client when getBlob is called explicitly.

          +    public void testGetBlob() throws SQLException, IOException {
          +        int numBytes = 0;
          +        getConnection().setAutoCommit(false);
          +        Statement stmt = createStatement();
          +        String sql = "SELECT bincol from t1";
          +        ResultSet rs = stmt.executeQuery(sql);
          +        rs.next();
          +        Blob blob = rs.getBlob(1);
          +        InputStream stream = blob.getBinaryStream();
          +        int read = stream.read();
          +        while (read != -1) {
          +            read = stream.read();
          +            numBytes++;
          +        }
          +        assertEquals(38000, numBytes);
          +        rs.close();
          +        assertEquals(0, countLocks());
          +        commit();
          +    }

          The result for embedded:
          1) testGetBlob(org.apache.derbyTesting.functionTests.tests.jdbcapi.LargeDataLocksTest)junit.framework.AssertionFailedError: expected:<0> but was:<2>
          at org.apache.derbyTesting.functionTests.tests.jdbcapi.LargeDataLocksTest.testGetBlob(LargeDataLocksTest.java:142)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at org.apache.derbyTesting.junit.BaseTestCase.runBare(BaseTestCase.java:95)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:23)
          at org.apache.derbyTesting.junit.BaseTestSetup.run(BaseTestSetup.java:57)

          The result for client :
          2) testGetBlob(org.apache.derbyTesting.functionTests.tests.jdbcapi.LargeDataLocksTest)junit.framework.AssertionFailedError: expected:<0> but was:<2>
          at org.apache.derbyTesting.functionTests.tests.jdbcapi.LargeDataLocksTest.testGetBlob(LargeDataLocksTest.java:142)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at org.apache.derbyTesting.junit.BaseTestCase.runBare(BaseTestCase.java:95)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:23)
          at org.apache.derbyTesting.junit.BaseTestSetup.run(BaseTestSetup.java:57)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:23)
          at org.apache.derbyTesting.junit.BaseTestSetup.run(BaseTestSetup.java:57)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:23)

          Reading the comments, I understand that the background of this result is the locking for LOBs.
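As a side note, the byte-counting loop used in that test can be exercised standalone against an in-memory stream; a minimal sketch (the CountBytes class and the 38-byte buffer are illustrative, standing in for the 38000-byte column read through blob.getBinaryStream()):

```java
import java.io.ByteArrayInputStream;

// Standalone sketch of the byte-counting loop from the test above, run
// against an in-memory stream instead of a Blob column. The parameter is
// typed ByteArrayInputStream only so the sketch avoids checked IOException.
class CountBytes {
    static int countBytes(ByteArrayInputStream stream) {
        int numBytes = 0;
        int read = stream.read();
        while (read != -1) {
            read = stream.read();
            numBytes++; // the final read of -1 accounts for the first byte
        }
        return numBytes;
    }

    public static void main(String[] args) {
        // 38 bytes here stands in for the 38000-byte bincol value
        System.out.println(countBytes(new ByteArrayInputStream(new byte[38]))); // prints 38
    }
}
```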

          naka Tomohito Nakayama added a comment -

          Now I have come to think that the current client/server implementation of Derby has a problem around the
          ResultSet#getBinaryStream method (and possibly the other getXXXXStream methods as well).

          My intuitive thinking is that locator features may not be needed for this kind of method,
          i.e. the getXXXXStream methods as opposed to getXlob.

          narayanan V.Narayanan added a comment -

          >Now, I came to think that the current implementation
          >of client / server of derby have problem around
          >ResultSet#getBinaryStream method (and might be
          >other getXXXXStream also).

          Features of locator are not present for the ResultSet
          and the PreparedStatement methods.

          Relevant comments from DERBY-2604:

          >As already discussed in the JIRA issue that was raised for
          >PreparedStatements and CallableStatements
          >(https://issues.apache.org/jira/browse/DERBY-2529) there
          >are no changes needed here related to Clob also.

          >A similar case would exist for ResultSets also since
          >the LOB in this case is also B-Layer streamed and
          >there would be no significant improvement with using locators.

          kmarsden Kathey Marsden added a comment - - edited

          V. Narayanan said
          >Features of locator are not present for the ResultSet
          >and the PreparedStatement methods.

          I am a little confused by this. Are you saying that a ResultSet.getBinaryStream() on the client should not be translated into a getBlob() on the server? That does not seem to be the case in trunk/10.3. Can you try running jdbcapi/LargeDataLocks.java and see if perhaps this can be resolved easily?

          narayanan V.Narayanan added a comment -

          >I am a little confused by this. Are you saying that a ResultSet.getBinaryStream()
          >on the client should not be translating into a getBlob() on the server. That does
          >not seem to be the case in the trunk/10.3. Can you try running jdbcapi/LargeDataLocks.java
          >and see if perhaps this can be resolved easily.

          While working on locators, the ResultSet nor the PreparedStatement methods were
          translated to stored procedure calls to embedded methods, since this would be inefficient
          compared to the Layer B streaming, and no materialization of LOBs was
          happening that needed to be avoided. Hence it was OK to retain the current way it is done.

          Internally, I am not sure if the translation of getBlob is happening at present.

          narayanan V.Narayanan added a comment -

          Sorry I meant "Neither the ResultSet nor the PreparedStatement methods were
          translated to stored procedure calls to embedded methods" (i.e.) They were not
          translated to stored procedure calls.

          kmarsden Kathey Marsden added a comment -

          I am not sure how stored procedures come into it, but what I have been seeing is that with a 10.2 client or lower the DRDAType for getBinaryStream is DRDAConstants.DRDA_TYPE_NLOBBYTES, so we instantiate an EXTDTAInputStream and with the fix for DERBY-2941 all is OK, BUT if we use a 10.3 client the DRDAType is DRDAConstants.DRDA_TYPE_NLOBLOC, so we do a getObject() (aka getBlob) on the column and it holds locks.

          Also I want to clarify that this issue is fixed with 10.2.2.1 clients and above with the fix for DERBY-2941. It regresses with 10.3 clients and higher, so users affected by this issue should not upgrade to 10.3.

          Narayanan, do you have time to look and see if the change to use DRDA_TYPE_NLOBLOC for this case was necessary? This is a serious regression in 10.3 that is likely to affect users. It would be good to get it addressed as soon as possible, and I am not familiar enough with the locator code to say how it should or should not be.

          You can run the test jdbcapi/LargeDataLocks, which is checked into the trunk but is not currently running there due to this issue.

          oysteing Øystein Grøvlen added a comment -

          To answer Kathey's question: On the server, a stable mapping is needed
          between the locator ID and the Blob value. Earlier, the entire Blob
          value was copied to the client in one round-trip. Now, only as much
          data as the client requests is copied per round-trip. It seems natural to use
          a Blob object to represent the Blob value between round-trips. Hence,
          I think getting a Blob object is necessary.

          As I said earlier, I would prefer to fix this by using another
          mechanism than locking to guarantee the stability of a Blob value. I
          plan to look closer into how to fix this in a week or two.

          kmarsden Kathey Marsden added a comment -

          Øystein said:
          >I plan to look closer into how to fix this in a week or two.

          Thanks Øystein!

          kmarsden Kathey Marsden added a comment -

          Hello Øystein. I just wanted to check in on this issue. Have you been able to look closer at solution options?

          Thanks

          Kathey

          oysteing Øystein Grøvlen added a comment -

          I have started to look a bit more at this issue. I have not got very
          far yet, but I have identified that the locking of the record
          containing the Blob occurs in OverflowInputStream#initStream. Here an
          intent-shared lock on the table and a record lock on the record
          containing the Blob are obtained. As far as I can tell, the table lock is
          held for the duration of the transaction while the record lock follows the
          transaction's isolation level. Does this mean that it is the table
          intent lock that is the problem here?

          If I run the LargeDataLocks program attached to DERBY-255, it shows
          that an IS table lock remains. There is some discussion in
          OverflowInputStream#initStream on whether one should use the
          container's isolation level also for the intent lock, and not always use
          REPEATABLE READ, but it says:

          To do this, need to consider:
          Sometimes the container's locking policy may NOT reflect the
          correct locking policy. For example, if the container is a
          table (not an index) and Access handles the locking of the
          table via an index, the container's locking policy would be
          set to do no locking. Moreover, if the container is an index,
          the locking policy would always be set to do no locking.

          I guess I have to read up on the use of intent locks and the
          difference between locking policies of transactions and containers
          before I can understand what this means.

          oysteing Øystein Grøvlen added a comment -

          My previous comment is mistaken. The LargeDataLocks program shows that
          both a table intent lock and a row lock remain:

          Select * from new org.apache.derby.diag.LockTable() as LT
          XID,TYPE,MODE,TABLENAME,LOCKNAME,STATE,TABLETYPE,LOCKCOUNT,INDEXNAME
          203,TABLE,IS,T1,Tablelock,GRANT,T,1,null
          203,ROW,S,T1,(2,6),GRANT,T,1,null

          Sorry for the confusion.

          While copy-on-write seems like a good idea, I am afraid it will be
          quite some work to implement a new mechanism like that, and I think I
          will try a simpler approach first.

          My first attempt will try to see if there is some way to make the
          engine release locks when locators are freed. Then, the client could
          release locators associated with streams when next()/close() is called
          on the result set.

          oysteing Øystein Grøvlen added a comment -

          So my plan to fix this has two parts:

          1. Change the locking so that the lock for a LOB is released when the
          LOB object is closed. My studies so far indicate that this can be
          achieved as follows:

          a) Use read committed instead of repeatable read for the locking
          policy in OverflowInputStream#initStream. This will associate
          the lock with the BaseContainerHandle that owns the
          OverflowInputStream instead of the transaction.

          b) Release the locks for the BaseContainerHandle that owns the
          OverflowInputStream when it is closed. (Debugging shows that
          OverflowInputStream#close is called when a Blob/Clob object is
          freed.)

          2. Make sure a client releases locators when they are not to be used
          anymore. That is, the procedure to release a locator obtained by
          getBinaryStream etc. will be called when next() or close() is
          called on the result set.

          Since, according to the JDBC spec, such streams are only valid
          until the next getXXX call, it should only be necessary to keep
          track of one such locator at a time. So when a new stream is
          opened, the previous locator can be released. Hence, it should not
          be necessary to maintain a set of locators for the current row; one
          single "current" locator per result set is sufficient.
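
The single-current-locator bookkeeping described in point 2 could be sketched roughly as follows. CurrentLocatorTracker, streamOpened(), and the IntConsumer release callback are all hypothetical names for illustration, not Derby's actual client classes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntConsumer;

// Hypothetical sketch: the result set remembers only ONE "current" locator
// for streams handed out by getXXXStream(). Opening a new stream releases
// the previous locator; next()/close() release the last one.
class CurrentLocatorTracker {
    private int currentLocator = -1;       // -1 means "no stream locator open"
    private final IntConsumer releaseProc; // stands in for the server round-trip

    CurrentLocatorTracker(IntConsumer releaseProc) {
        this.releaseProc = releaseProc;
    }

    // Called when getBinaryStream() etc. obtains a new locator.
    void streamOpened(int locator) {
        releaseCurrent();
        currentLocator = locator;
    }

    // Called from next() and close() on the result set.
    void releaseCurrent() {
        if (currentLocator != -1) {
            releaseProc.accept(currentLocator);
            currentLocator = -1;
        }
    }

    public static void main(String[] args) {
        List<Integer> released = new ArrayList<>();
        CurrentLocatorTracker t = new CurrentLocatorTracker(released::add);
        t.streamOpened(101);  // getBinaryStream() on the row
        t.streamOpened(102);  // a later getXXXStream() releases locator 101 first
        t.releaseCurrent();   // ResultSet.close() releases locator 102
        System.out.println(released); // prints [101, 102]
    }
}
```

Locators obtained via getBlob/getClob would deliberately bypass this tracker, since Blob/Clob objects live until the end of the transaction.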

          kmarsden Kathey Marsden added a comment -

          Oystein said ..

          >2. Make sure a client releases locators when they are not to be used
          > anymore. That is, the procedure to release a locator obtained by
          > getBinaryStream etc. will be called when next() or close() is
          > called on the result set.

          This makes sense for getBinaryStream, but for Blobs the spec says
          "A Blob object is valid for the duration of the transaction in which it was created."

          Will that still be the case with your proposed change?

          oysteing Øystein Grøvlen added a comment -

          Kathey Marsden (JIRA) wrote:
          > Oystein said ..
          >
          >> 2. Make sure a client releases locators when they are not to be used
          >> anymore. That is, the procedure to release a locator obtained by
          >> getBinaryStream etc. will be called when next() or close() is
          >> called on the result set.
          >
          > This makes sense for getBinaryStream, but for Blobs the spec says
          > "A Blob object is valid for the duration of the transaction in which it was created."
          >
          > Will that still be the case with your proposed change?

          Yes, locators obtained when doing ResultSet.get[BC]lob will not be released, only locators obtained by other ResultSet.getXXX methods (e.g., getBinaryStream).

          oysteing Øystein Grøvlen added a comment -

          DERBY-3098 will handle part 1 as described above.

          oysteing Øystein Grøvlen added a comment -

          Attached patch derby-2892firstshot.diff makes LargeDataLocksTest also work for client/server. The approach is to make sure that closing a stream will free the underlying Blob; in addition, when closing a result set, the open stream needs to be closed.

          However, this fix also has an effect when closing streams that were obtained from a Blob object instead of directly from the result set. This is not correct, since such Blob objects should live until the end of the transaction unless explicitly closed. This causes a few test cases in BlobClob4BlobTest to break.

          Hence, the attached patch needs to be modified to distinguish between streams that were obtained from a result set and streams that were obtained from a Blob object.

          oysteing Øystein Grøvlen added a comment -

          Fixing this issue will create backward-compatibility issues. For a Blob/Clob column of a result set, only one get method can be called, and only once. For example, after executing ResultSet.getBinaryStream on a column, all following get methods (e.g., getBlob, getBinaryStream, getBytes) on this column will fail.

          oysteing Øystein Grøvlen added a comment -

          Attached is a patch that solves the reported problem, and runs without
          errors in the current test suites (suites.All and derbyall). However,
          I do not feel confident that there is sufficient testing in this area
          to verify the fix. I do not have time to add more tests right now,
          but I hope to get back to this later.

          Note also that this fix may affect existing applications, since from
          now on only one get method per Blob/Clob column may be called per
          row of the result set.

          The following describes the fixes in more detail:

          M java/testing/org/apache/derbyTesting/functionTests/tests/jdbcapi/LargeDataLocksTest.java
          Activate test also for client server to test the bug fix.

          M java/client/org/apache/derby/client/am/Cursor.java
          Access on Blob/Clob columns of result set is no longer forwarded
          to the relevant method on the Blob/Clob object. Instead, it is
          made sure that the underlying Blob/Clob objects are freed when the
          access is completed.

          M java/client/org/apache/derby/client/am/Blob.java
          Make getBinaryStreamX() package private so that it can be used by
          Cursor. This way, conversions between SqlException and
          SQLException are avoided.

          M java/client/org/apache/derby/client/am/Clob.java
          Make the get...StreamX methods package private so that they can be used
          by Cursor. This way, conversions between SqlException and
          SQLException are avoided.

          M java/client/org/apache/derby/client/am/BlobLocatorInputStream.java
          Add method setFreeBlobOnClose() which can be called in order to
          make the stream free the underlying Blob object when it is closed.

          M java/client/org/apache/derby/client/am/ResultSet.java
          Closing the result set should close streams that have been opened
          on columns of the result set.

          M java/client/org/apache/derby/client/am/UpdateSensitiveLOBLocatorInputStream.java
          Closing the stream should also close the wrapped stream.

          M java/client/org/apache/derby/client/am/ClobLocatorInputStream.java
          Add method setFreeClobOnClose() which can be called in order to
          make the stream free the underlying Clob object when it is closed.

          M java/client/org/apache/derby/client/am/ClobLocatorReader.java
          Add method setFreeClobOnClose() which can be called in order to
          make the stream free the underlying Clob object when it is closed.
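          The free-on-close behavior described above for BlobLocatorInputStream, ClobLocatorInputStream, and ClobLocatorReader can be sketched in plain Java. The class and names below (FreeOnCloseInputStream, the Runnable free action) are illustrative stand-ins, not Derby's actual code:

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative stand-in for the setFreeBlobOnClose() idea: when enabled,
// closing the stream also releases the underlying LOB resource, so the
// server-side locks can be dropped without the application freeing the
// Blob/Clob explicitly.
class FreeOnCloseInputStream extends FilterInputStream {
    private final Runnable freeAction; // stands in for blob.free() on the locator
    private boolean freeOnClose = false;
    private boolean closed = false;

    FreeOnCloseInputStream(InputStream in, Runnable freeAction) {
        super(in);
        this.freeAction = freeAction;
    }

    void setFreeOnClose() {
        this.freeOnClose = true;
    }

    @Override
    public void close() throws IOException {
        if (closed) {
            return; // closing twice must not free twice
        }
        closed = true;
        super.close();
        if (freeOnClose) {
            freeAction.run();
        }
    }
}

public class FreeOnCloseDemo {
    public static void main(String[] args) throws IOException {
        final boolean[] freed = {false};
        FreeOnCloseInputStream s = new FreeOnCloseInputStream(
                new ByteArrayInputStream(new byte[] {1, 2, 3}),
                () -> freed[0] = true);
        s.setFreeOnClose();
        s.read();
        s.close();
        System.out.println("freed=" + freed[0]); // freed=true
    }
}
```

          The guard against double-freeing matters because ResultSet.close() may close streams that the application has already closed itself.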

          kmarsden Kathey Marsden added a comment -

          Øystein said:
          > Fixing this issue will create backward-compatibility issues. For a Blob/Clob column of a result set, only one get method can be called and only once. For example, after executing ResultSet.getBinaryStream on a column, all following get methods (e.g., getBlob, getBinaryStream, getBytes) on this column will fail.

          I thought this was the case since DERBY-721 was fixed.

          oysteing Øystein Grøvlen added a comment -

          > Kathey Marsden commented on DERBY-2892:
          > ---------------------------------------
          >
          > Øystein said.
          >> Fixing this issue will create backward-compatibility issues. For a
          >> Blob/Clob column of a result set, only one get method can be
          >> called and only once. For example, after executing
          >> ResultSet.getBinaryStream on a column, all following get methods
          >> (e.g., getBlob, getBinaryStream, getBytes) on this column wil fail.
          >
          > I thought this was the case since DERBY-721 was fixed.

          I am afraid that some of these restrictions may have been relaxed by
          accident when we introduced locators for 10.3. Anyway, this patch will
          restrict usage even further than DERBY-721. With a 10.2.1 server, I
          am able to do getBytes() followed by getBinaryStream(). With my patch
          this will not be possible since getBytes will release the underlying
          Blob. I will investigate the behavior of previous releases a bit
          further and report back.

          oysteing Øystein Grøvlen added a comment -

          I have tested the patch with the repro attached here: http://www.nabble.com/Re%3A-Iterating-through-large-result-set-in-network-mode-causes-OutOfMemoryException-p15364393.html
          This repro does not access the blob columns that are selected, and it turns out that with the current patch, the blob objects are not released, and OOM error occurs. If I add a call to ResultSet#getBytes for each row, the OOM error does not occur.

          kmarsden Kathey Marsden added a comment -

          Do you think this change will make it in for 10.4?
          If we are going to change behavior we probably best do it on
          minor release boundaries.

          oysteing Øystein Grøvlen added a comment -

          I agree, and I hope to get back to this within a week or so, but if
          others have the time to look at it, please do so. As mentioned in my
          comment, I have discovered that my proposed fix will not work for the
          case where the selected blob objects are never accessed by JDBC. That
          should be fixed, but the current patch will at least fix the more common
          scenarios. In addition, some more tests should be added.


          Øystein

          kmarsden Kathey Marsden added a comment -

          I have a few cycles to help out with this issue. As I see it, what needs
          to be done is
          1) Apply patch to latest and make sure tests pass.
          2) Add release note that you cannot access a column twice with JDBC.
          3) Add more tests.

          • Add test to attempt to retrieve column twice and ensure reasonable error.
          • What other tests should be added?
          4) After checkin, file issue for lock timeout if column is not accessed by JDBC.
          kristwaa Kristian Waagan added a comment -

          Kathey, I'm interested in looking at point 4. At least I believe it is the same problem. I used the repro Øystein mentioned above, where I got an OOME instead of a timeout.
          I coded a prototype that solved the problem on the client side, but I'm wondering if it can be better solved on the server side.
          I will look into this and report my findings.

          kristwaa Kristian Waagan added a comment -

          I filed DERBY-3571 for the case where locators are not freed if the LOB columns are not accessed.
          Review and comments are welcome.

          I haven't tested it, but I think Øystein's patch and mine should go along fine together. They do, however, affect each other with respect to the timing of the locator release and round trips to the server. We might want to investigate this with regard to performance.

          kmarsden Kathey Marsden added a comment - - edited

          Thanks Kristian for looking at the issue of the lock timeout when the column is not accessed via JDBC.

          For the DERBY-2892 patch, I think we probably need to work on the error message if a column is retrieved twice.
          What happens now if I do getString twice on a Clob column is I get the message:

          XJ217 - You cannot invoke other java.sql.Clob/java.sql.Blob methods after calling the free() method or after the Blob/Clob's transaction has been committed or rolled back.

          I don't know that that makes sense given the user called getString()

          For getCharacterStream()/getBinaryStream() there is no error on the call, just an IOException on the read.

          kmarsden Kathey Marsden added a comment -

          I was looking at improving the error messages when a column is accessed
          more than once, and I realized that doing so will be much easier once the
          patch for DERBY-3571 is in, which records whether lob columns have
          been accessed.

          I would like to propose that we go ahead and commit the derby-2892 patch to
          trunk and then attack the improved error messages after DERBY-3571 goes
          in as a separate issue.

          Thoughts?

          kristwaa Kristian Waagan added a comment -

          Note that DERBY-3571 will only track LOB locators in its current state.
          I think it can be extended to track all LOBs, but it is not clear to me at this point if that is the right way to proceed.
          Answering these questions would help determine that:
          a) Shall an exception be thrown for any type of column if accessed twice?
          b) If not, which columns types does the restriction apply to?

          I think a more general solution would be needed if an exception shall be raised for all column types.
          If a solution is needed only for LOBs, I believe LOBStateTracker can be extended and used for the purpose.

          kmarsden Kathey Marsden added a comment -

          From what I understand, the restriction is only for lob columns.
          If no one objects I will commit Oystein's patch to trunk on
          Monday and file an issue for the improved error messages.

          kmarsden Kathey Marsden added a comment -

          Attached is a first shot at a release note for this issue. Please let me know if you have comments.

          kmarsden Kathey Marsden added a comment -

          Committed this to trunk with revision 642974. I will backport to 10.4
          after tinderbox runs cleanly.

          kmarsden Kathey Marsden added a comment -

          committed to trunk and 10.4

          kristwaa Kristian Waagan added a comment -

          'derby-2892-1a-alternative_fix_partial.diff' demonstrates how the mechanism added by DERBY-3571 can be used to release the locators.
          It does not implement the required code cleanup, which would basically be to remove parts of the already committed patch. It allows multiple calls to all getter methods, but only a single call is allowed for the getXStream methods. This restriction is enforced independently of this patch. A solution disallowing multiple calls can easily be implemented on top of patch 1a.

          The advantage of the alternative implementation is that it requires less code, and it might make the locator release a lot more efficient once the piggy-backing scheme is implemented.

          suites.All ran without failures (except for the constantly failing management test).
          Patch ready for comments.
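          The mechanism in the alternative patch can be sketched in plain Java. The LocatorTracker name, the integer locators, and the release callback below are assumptions for illustration, not the actual LOBStateTracker code: the client records each locator created for the current row and releases any still-active ones when the cursor moves on or the result set is closed.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntConsumer;

// Hypothetical sketch in the spirit of LOBStateTracker: locators allocated
// while reading a row are released when the cursor moves to the next row,
// whether or not the application ever touched the LOB columns. This is what
// prevents the server from accumulating locked LOBs (and the client from
// running out of memory) on large result sets.
class LocatorTracker {
    private final List<Integer> activeLocators = new ArrayList<>();
    private final IntConsumer releaseCall; // stands in for the release round trip

    LocatorTracker(IntConsumer releaseCall) {
        this.releaseCall = releaseCall;
    }

    // Called when a LOB column of the current row materializes a locator.
    void registerLocator(int locator) {
        activeLocators.add(locator);
    }

    // Called from ResultSet.next()/close() before leaving the current row.
    void releaseAll() {
        for (int locator : activeLocators) {
            releaseCall.accept(locator);
        }
        activeLocators.clear();
    }
}

public class LocatorTrackerDemo {
    public static void main(String[] args) {
        List<Integer> released = new ArrayList<>();
        LocatorTracker tracker = new LocatorTracker(released::add);
        tracker.registerLocator(101); // row 1 has two LOB columns
        tracker.registerLocator(102);
        tracker.releaseAll();         // cursor moves to row 2
        System.out.println(released); // [101, 102]
    }
}
```

          Batching the release calls per row is also what would let a piggy-backing scheme fold them into the next fetch round trip instead of issuing one call per locator.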

          kmarsden Kathey Marsden added a comment -

          I think the approach in 'derby-2892-1a-alternative_fix_partial.diff' is preferable to the current approach because it does not introduce incompatibilities and can be backported to 10.3 where the bug still exists. I don't think we should disallow multiple calls unless we have to.

          Kathey

          kristwaa Kristian Waagan added a comment -

          FYI, some of our streams, for instance BlobLocatorInputStream, don't have a notion of being closed. It is therefore important that such streams are wrapped in a CloseFilterInputStream (and they are).
          I do not know why some streams ignore the close action, and I haven't verified if they are wrapped in each and every place they are used (outside the ResultSet class).
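          A minimal sketch of why such a wrapper matters; the ClosedAwareInputStream class below is illustrative, not Derby's actual CloseFilterInputStream. Without the wrapper, a stream that ignores close() would happily keep serving reads after the result set is done with it:

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative wrapper that gives a "closed" notion to streams lacking one:
// after close(), reads fail instead of silently hitting the raw stream.
// (Only the no-arg read() is overridden here to keep the sketch short; a
// real wrapper would guard the other read variants too.)
class ClosedAwareInputStream extends FilterInputStream {
    private boolean closed = false;

    ClosedAwareInputStream(InputStream in) {
        super(in);
    }

    @Override
    public int read() throws IOException {
        if (closed) {
            throw new IOException("Stream is closed");
        }
        return super.read();
    }

    @Override
    public void close() throws IOException {
        closed = true;
        super.close();
    }
}

public class ClosedAwareDemo {
    public static void main(String[] args) throws IOException {
        ClosedAwareInputStream s =
                new ClosedAwareInputStream(new ByteArrayInputStream(new byte[] {42}));
        System.out.println(s.read()); // 42
        s.close();
        try {
            s.read();
        } catch (IOException e) {
            System.out.println("read after close rejected");
        }
    }
}
```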

          kristwaa Kristian Waagan added a comment -

          Reopening to commit alternative fix.
          I expect there will be two patches, one is ready. The next one will remove some of the new functionality added in the original patch.

          kristwaa Kristian Waagan added a comment -

          'derby-2892-1b-alternative_fix_partial.diff' is slightly changed from 1a; reordered keywords (abstract public -> public abstract) and removed an added blank line.

          Committed to trunk with revision 646255.
          I plan to backport the fix to 10.4 when the last patch has been committed, and also investigate the possibility of backporting to 10.3.

          kristwaa Kristian Waagan added a comment -

          Forgot to say, but I ran the repro Øystein mentioned with patch 1b applied and Xmx set to 48m without getting OOMEs.

          'derby-2892-2a-alternative_fix_cleanup.diff' removes code that was added to support the first solution.
          Need to verify if I have forgotten something.

          Patch ready for review.

          dyret Dyre Tjeldvoll added a comment -

          Patch derby-2892-1b-alternative_fix_partial.diff merged to 10.4.
          Committed revision 647465.

          kristwaa Kristian Waagan added a comment -

          Thanks for backporting the fix Dyre.

          I committed the last cleanup patch (2a) to trunk with revision 647680 and merged it into 10.4 with revision 647682.
          I'm not planning on any more fixes under this issue.

          Can the release note needed flag be unset now?

          kmarsden Kathey Marsden added a comment -

          Unsetting the release note needed and existing application impact flags as this issue's fix no longer impacts existing applications.

          I am leaving this issue open to port to 10.3

          kmarsden Kathey Marsden added a comment -

          I attempted to merge this fix to 10.3. Attached is the patch for my first try (derby-2892_10.3_try1_diff.txt not for commit). I ported 643091, 646255, and 647680 from trunk, but something is not quite right, because I am still getting locks held. LargeDataLocksTest fails with

          1) testGetBinaryStream(org.apache.derbyTesting.functionTests.tests.jdbcapi.LargeDataLocksTest)
          junit.framework.AssertionFailedError: expected:<0> but was:<2>
          at org.apache.derbyTesting.functionTests.tests.jdbcapi.LargeDataLocksTest.testGetBinaryStream(LargeDataLocksTest.java:114)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at org.apache.derbyTesting.junit.BaseTestCase.runBare(BaseTestCase.java:88)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:23)
          at org.apache.derbyTesting.junit.BaseTestSetup.run(BaseTestSetup.java:57)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:23)
          at org.apache.derbyTesting.junit.BaseTestSetup.run(BaseTestSetup.java:57)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:23)

          Oddly testCharacterStream seems to be ok. Investigating what could be the matter.

          Kathey

          kmarsden Kathey Marsden added a comment -

          I found my problem. My client was way out of date because I had reverted to a very old version to track down a regression. I think I was missing the fix for DERBY-3098. Sorry for the noise. I'll run the full set of tests and commit if all goes well.

          Kathey

          kristwaa Kristian Waagan added a comment -

          Good to hear.
          I'm open to look into any issues you might find in this area after your latest test cycle has completed.

          kmarsden Kathey Marsden added a comment -

          Resolving for 10.3.2.2. Assigning to Kristian since we went with his fix in the end.


            People

            • Assignee: kristwaa Kristian Waagan
            • Reporter: tomes Thomas Niessen
            • Votes: 4
            • Watchers: 2