DERBY-4677

SYSCS_COMPRESS_TABLE disables unique constraints

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 10.4.1.3, 10.4.2.0, 10.4.2.1, 10.5.1.1, 10.5.2.0, 10.5.3.0, 10.6.1.0
    • Fix Version/s: 10.4.2.1, 10.5.3.1, 10.6.2.1, 10.7.1.1
    • Component/s: SQL
    • Labels:
      None
    • Issue & fix info:
      Release Note Needed, Repro attached
    • Bug behavior facts:
      Data corruption, Deviation from standard, Seen in production

      Description

      It appears that running SYSCS_UTIL.SYSCS_COMPRESS_TABLE on a table with a nullable unique constraint disables the unique constraint. The script below should reproduce the problem. The expected behaviour is for the second insert to fail due to the unique constraint, but instead it is allowed. The second insert fails as expected if either the call to SYSCS_COMPRESS_TABLE is skipped or the column is declared NOT NULL.

      I have reproduced the problem using embedded Derby 10.5.1.1, 10.5.3.0 and 10.6.1.0 using ij.

      CREATE TABLE TABLE1(NAME1 INT UNIQUE);

      CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'TABLE1', 1);

      INSERT INTO TABLE1(NAME1) VALUES(1);

      INSERT INTO TABLE1(NAME1) VALUES(1);

      SELECT * FROM TABLE1;

      DROP TABLE TABLE1;

      1. DERBY4677_diff_patch1.txt
        9 kB
        Mamta A. Satoor
      2. DERBY4677_stat_patch1.txt
        0.4 kB
        Mamta A. Satoor
      3. releaseNote.html
        4 kB
        Lily Wei
      4. releaseNote.html
        4 kB
        Mamta A. Satoor
      5. releaseNote.html
        4 kB
        Mamta A. Satoor
      6. releaseNote.html
        4 kB
        Mamta A. Satoor

        Activity

        Brett Mason created issue -
        Mamta A. Satoor added a comment -

        I see that this fails even with 10.4.2.1. Prior to 10.4, to define a unique constraint one was also required to declare the column NOT NULL, so the above test case does not work prior to 10.4 unless the table is created as CREATE TABLE TABLE1(NAME1 INT UNIQUE NOT NULL);

        But starting with 10.4, the above test case indeed fails. It has something to do with allowing a column to be unique without requiring it to be non-null. I haven't looked at the cause of the failure in the code yet.

        Mamta A. Satoor made changes -
        Field Original Value New Value
        Affects Version/s 10.4.2.1 [ 12313401 ]
        Mamta A. Satoor made changes -
        Assignee Mamta A. Satoor [ mamtas ]
        Mamta A. Satoor added a comment -

        Just to be a little more clear: if the index is non-nullable, then compress does not break it. I tested this on 10.1 onwards up to trunk, so the behavior for non-nullable indexes hasn't changed.

        Nullable indexes, on the other hand, do not work after compress. Nullable unique indexes must have been introduced in 10.4, because such an index can't be created prior to 10.4. But doing a compress on a table with a nullable unique index appears to break that index. I will look into the code to see what is going on.

        Mamta A. Satoor added a comment -

        For compress table, code in AlterTableConstantAction kicks in to drop and recreate the indexes. When recreating the indexes, we forgot to take into account the property "uniqueWithDuplicateNulls" when asking store to recreate the index. For a new nullable unique index, CreateIndexConstantAction has the following code for databases at 10.4 and higher levels:

        indexProperties.put("uniqueWithDuplicateNulls", Boolean.toString(true));

        But in AlterTableConstantAction, we did not check whether the index is of type nullable unique, and we never set the property "uniqueWithDuplicateNulls".

        The following simple code change in AlterTableConstantAction has fixed the problem, but the bigger issue is that there might be databases lying around with duplicate values even though the user thought unique indexes were defined on their tables.

        if (cd.getIndexDescriptor().isUniqueWithDuplicateNulls()) {
            properties.put("uniqueWithDuplicateNulls", Boolean.toString(true));
        }

        The other troubling issue is whether there are other places in the code where we have forgotten about this property while recreating indexes.

        Mike Matrigali added a comment -

        What happened was that store basically was instructed to build a non-unique index, and it did so. I agree with the proposed fix, and will review when a patch is submitted.

        Note that this fix will only correct the issue for future compress calls. Whatever has happened in the past will remain broken until the user takes some action. It will be hard to determine whether a broken index exists. One can run a query to see if a candidate index exists (basically: is there a unique index that contains nullable columns?), but I can't think of a generic way to tell if compress has been run and broken the index.

        I believe that if the user has not inserted any duplicates, then another call to offline compress using a version with the fix will repair the problem.

        If the user has inserted duplicate data into the index after compress broke the index, then automatic cleanup is not going to work. Once a fix has been proposed, we should verify the error that arises in this case, and also verify that the compress gets at least a statement-level error, backing out whatever intermediate work was done before failing. Maybe we can even catch the exception at appropriate places in the code and raise a new error giving instructions on what needs to be done.
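
        A hedged sketch of the kind of catalog query Mike suggests, using only the standard SYS catalogs. It over-reports, listing every column of each table that has a unique constraint, because mapping a constraint to its exact key columns would require decoding the index descriptor; a candidate index is a unique constraint whose key column's type prints without NOT NULL.

        -- Sketch only: tables with unique constraints, plus their columns' types.
        SELECT t.TABLENAME, con.CONSTRAINTNAME, col.COLUMNNAME, col.COLUMNDATATYPE
        FROM SYS.SYSCONSTRAINTS con
        JOIN SYS.SYSTABLES t ON con.TABLEID = t.TABLEID
        JOIN SYS.SYSCOLUMNS col ON col.REFERENCEID = t.TABLEID
        WHERE con.TYPE = 'U'
        ORDER BY t.TABLENAME, col.COLUMNNUMBER;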

        Mike Matrigali made changes -
        Affects Version/s 10.5.2.0 [ 12314116 ]
        Affects Version/s 10.4.2.0 [ 12313345 ]
        Affects Version/s 10.4.1.3 [ 12313111 ]
        Component/s SQL [ 11408 ]
        Mike Matrigali added a comment -

        I did a quick search in the execute directory, and it looks like there are two other cases, in InsertResultSet, involving the bulk insert and bulk replace optimizations. It would be good if test cases for these paths were also generated. I think this means that we could also have broken the indexes on an insert-as-select into an empty table.

        I looked at all the other places in the code that set "nUniqueColumns" and did not see any other problems.

        Mamta A. Satoor added a comment -

        I wanted to try out the case where, because of the bug in Derby, you are able to insert duplicate rows after compress table even though the table has a unique nullable index defined on it. Next, I fixed the Derby code to properly copy the unique nullable property during index recreation in the compress table process. I ran the buggy database with the new Derby code on my machine. First I inserted more duplicate rows (because store still thinks the index is not unique from the earlier compress with the buggy Derby version), and then did a compress on the table. After this compress, I can't insert any more duplicate rows. Interestingly though, no errors were raised during compress because of the duplicate rows. I will look into the code to see why we do not run into problems during index recreation when there are duplicate rows in the table.

        Following are the exact steps I followed for this testing.

        Build Derby code based on the trunk codeline. This Derby version has the bug which allows duplicate rows after compress table. Use this buggy Derby version to run the following ij script, which shows that duplicate rows are allowed:
        $ java -Dij.exceptionTrace=true org.apache.derby.tools.ij
        ij version 10.7
        ij> connect 'jdbc:derby:testDB4677;create=true';
        ij> CREATE TABLE TABLE1(NAME1 INT UNIQUE);
        0 rows inserted/updated/deleted
        ij> CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'TABLE1', 1);
        0 rows inserted/updated/deleted
        ij> INSERT INTO TABLE1(NAME1) VALUES(1);
        1 row inserted/updated/deleted
        ij> INSERT INTO TABLE1(NAME1) VALUES(1);
        1 row inserted/updated/deleted
        ij> SELECT * FROM TABLE1;
        NAME1
        -----------
        1
        1

        2 rows selected
        ij> exit;

        Notice the duplicate rows in TABLE1 in the above ij session.

        Next, in my trunk codeline, I fixed AlterTableConstantAction to use the unique nullable property during index recreation if the index was defined as unique nullable. This recreation happens when the table is compressed.

        When I run the above db with this fixed code, it still allows duplicate row creation at first because I have not yet run compress table.
        $ java -Dij.exceptionTrace=true org.apache.derby.tools.ij
        ij version 10.7
        ij> connect 'jdbc:derby:testDB4677;create=true';
        WARNING 01J01: Database 'testDB4677' not created, connection made to existing database instead.
        ij> INSERT INTO TABLE1(NAME1) VALUES(1);
        1 row inserted/updated/deleted
        ij> SELECT * FROM TABLE1;
        NAME1
        -----------
        1
        1
        1

        3 rows selected

        But once I do a compress table, further duplicate rows are disallowed.
        ij> CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'TABLE1', 1);
        0 rows inserted/updated/deleted
        ij> SELECT * FROM TABLE1;
        NAME1
        -----------
        1
        1
        1

        3 rows selected
        ij> INSERT INTO TABLE1(NAME1) VALUES(1);
        ERROR 23505: The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index identified by 'SQL100603220123640' defined on 'TABLE1'.

        I had thought that with my changes the compress table operation would fail because of the existing duplicate rows, but it does not. I will look further into this.

        Mike Matrigali added a comment -

        I thought it would fail also; maybe compress is optimized to not do the duplicate checking on index rebuild since it "knows" that there were no duplicates before. I would concentrate on fixing the bugs first. This behavior is probably not worth changing, and we should come up with some other way to catch the bad duplicates that were inserted. One option: a real drop and recreate of the index should find the problem.

        To track this down you could compare the code in create index to the code in alter table. I think there is some callback that is used to check duplicates.

        Mamta A. Satoor added a comment -

        I researched the Create Index code and the Alter Table code and found that for Create Index we generate different SortObservers depending on the type of the index. If it is a unique non-nullable index, Create Index creates a UniqueIndexSortObserver. For a unique nullable index, it creates a UniqueWithDuplicateNullsIndexSortObserver. For non-unique indexes, a BasicSortObserver gets created. Alter Table, on the other hand, creates a BasicSortObserver for all kinds of indexes, and that is why duplicates don't get caught for unique indexes. As Mike suggested, this may have been a conscious decision, since the assumption was that there were no duplicates before.

        For now, I will leave this assumption as it is, and go back to focusing on the two InsertResultSet cases where we don't seem to be using uniqueWithDuplicateNulls.

        Mamta A. Satoor added a comment -

        Attaching patch DERBY4677_diff_patch1.txt, which transfers the unique nullable property from the SQL layer to the store layer. The system catalogs have the correct information, but the unique nullability information was not getting transferred to store during btree recreation for compress table and bulk insert operations. Please share any comments you may have. Thanks.

        Mamta A. Satoor made changes -
        Attachment DERBY4677_diff_patch1.txt [ 12446809 ]
        Attachment DERBY4677_stat_patch1.txt [ 12446810 ]
        Mamta A. Satoor added a comment -

        Forgot to mention that I got one failure when I ran the JUnit All suite. When I ran the replication suite by itself, I didn't see the failure again. Not sure whether the following failure is a known intermittent failure. derbyall ran fine with no errors.
        1) testReplication_Local_1_Indexing(org.apache.derbyTesting.functionTests.tests.replicationTests.ReplicationRun_Local_1Indexing)junit.framework.ComparisonFailure: Unexpected SQL state. expected:<...0> but was:<...1>
        at org.apache.derbyTesting.junit.BaseJDBCTestCase.assertSQLState(BaseJDBCTestCase.java:762)
        at org.apache.derbyTesting.junit.BaseJDBCTestCase.assertSQLState(BaseJDBCTestCase.java:811)
        at org.apache.derbyTesting.functionTests.tests.replicationTests.ReplicationRun.failOver_direct(ReplicationRun.java:1385)
        at org.apache.derbyTesting.functionTests.tests.replicationTests.ReplicationRun.failOver(ReplicationRun.java:1314)
        at org.apache.derbyTesting.functionTests.tests.replicationTests.ReplicationRun_Local_1Indexing.testReplication_Local_1_Indexing(ReplicationRun_Local_1Indexing.java:97)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
        at org.apache.derbyTesting.junit.BaseTestCase.runBare(BaseTestCase.java:109)
        at org.apache.derbyTesting.functionTests.tests.replicationTests.ReplicationRun.runBare(ReplicationRun.java:207)
        at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
        at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
        at junit.extensions.TestSetup.run(TestSetup.java:16)
        Caused by: java.sql.SQLException: DERBY SQL error: SQLCODE: -1, SQLSTATE: XRE21, SQLERRMC: Error occurred while performing failover for database 'C:\p4clients\svnmain\client4\trunk\systest\out142\db_master\wombat', Failover attempt was aborted.::SQLSTATE: XRE04Connection lost for replicated database 'null'.
        at org.apache.derby.client.am.SQLExceptionFactory40.getSQLException(SQLExceptionFactory40.java:96)
        at org.apache.derby.client.am.SqlException.getSQLException(SqlException.java:358)
        at org.apache.derby.jdbc.ClientDriver.connect(ClientDriver.java:149)
        at java.sql.DriverManager.getConnection(DriverManager.java:317)
        at java.sql.DriverManager.getConnection(DriverManager.java:273)
        at org.apache.derbyTesting.functionTests.tests.replicationTests.ReplicationRun.failOver_direct(ReplicationRun.java:1372)
        ... 31 more
        Caused by: org.apache.derby.client.am.SqlException: DERBY SQL error: SQLCODE: -1, SQLSTATE: XRE21, SQLERRMC: Error occurred while performing failover for database 'C:\p4clients\svnmain\client4\trunk\systest\out142\db_master\wombat', Failover attempt was aborted.::SQLSTATE: XRE04Connection lost for replicated database 'null'.
        at org.apache.derby.client.am.Connection.completeSqlca(Connection.java:2118)
        at org.apache.derby.client.net.NetConnectionReply.parseRdbAccessFailed(NetConnectionReply.java:541)
        at org.apache.derby.client.net.NetConnectionReply.parseAccessRdbError(NetConnectionReply.java:434)
        at org.apache.derby.client.net.NetConnectionReply.parseACCRDBreply(NetConnectionReply.java:297)
        at org.apache.derby.client.net.NetConnectionReply.readAccessDatabase(NetConnectionReply.java:121)
        at org.apache.derby.client.net.NetConnection.readSecurityCheckAndAccessRdb(NetConnection.java:834)
        at org.apache.derby.client.net.NetConnection.flowSecurityCheckAndAccessRdb(NetConnection.java:758)
        at org.apache.derby.client.net.NetConnection.flowUSRIDONLconnect(NetConnection.java:591)
        at org.apache.derby.client.net.NetConnection.flowConnect(NetConnection.java:398)
        at org.apache.derby.client.net.NetConnection.<init>(NetConnection.java:218)
        at org.apache.derby.client.net.NetConnection40.<init>(NetConnection40.java:77)
        at org.apache.derby.client.net.ClientJDBCObjectFactoryImpl40.newNetConnection(ClientJDBCObjectFactoryImpl40.java:269)
        at org.apache.derby.jdbc.ClientDriver.connect(ClientDriver.java:83)
        ... 34 more

        Mike Matrigali added a comment -

        I have reviewed the DERBY4677_diff_patch1.txt patch, and the fix and the tests look good to me. +1 on committing it to trunk and, after it passes tests across platforms, backporting as appropriate.

        There should be a good release note describing the corruption and noting that the fix does not actually repair existing problems, it just stops them from happening in the future. It seems that a drop and recreate of unique indexes on nullable columns is the only guaranteed way to fix existing issues, and recreating the index will fail if the bug has allowed unique violations. There may be a query one can write to identify unique violations, but it may be tricky, as Derby may optimize based on "knowing" there can't be unique violations.
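
        A hedged sketch of both suggestions, using the repro table from the description. The generated constraint name below is the one from the ERROR 23505 message earlier in this thread and is illustrative only; in a real database you would first look the name up in SYS.SYSCONSTRAINTS.

        -- Identify unique violations that the broken index let through:
        SELECT NAME1, COUNT(*)
        FROM TABLE1
        WHERE NAME1 IS NOT NULL
        GROUP BY NAME1
        HAVING COUNT(*) > 1;

        -- Repair by dropping and re-adding the constraint. The ADD fails with
        -- ERROR 23505 if duplicates are still present, so clean them up first.
        ALTER TABLE TABLE1 DROP CONSTRAINT "SQL100603220123640";
        ALTER TABLE TABLE1 ADD CONSTRAINT UNIQ_NAME1 UNIQUE (NAME1);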

        Mamta A. Satoor made changes -
        Issue & fix info [Repro attached] [Release Note Needed, Repro attached]
        Mamta A. Satoor added a comment -

        Attaching the release note. Please let me know if you have any comments on it.

        Mamta A. Satoor made changes -
        Attachment releaseNote.html [ 12447055 ]
        Mamta A. Satoor added a comment -

        Committed changes into trunk (revision 954544) and 10.5 (revision 955487) so far. Working on backporting to the other affected codelines.

        Mamta A. Satoor made changes -
        Fix Version/s 10.5.3.1 [ 12314182 ]
        Fix Version/s 10.7.0.0 [ 12314971 ]
        Mamta A. Satoor added a comment -

        Backported change into 10.6 codeline with revision 955790.

        Mamta A. Satoor made changes -
        Fix Version/s 10.6.1.1 [ 12314973 ]
        Mamta A. Satoor made changes -
        Fix Version/s 10.4.2.1 [ 12313401 ]
        Mamta A. Satoor added a comment -

        Attaching an updated release note. Please review it and let me know if you have any comments.

        Also, a little more information on the stored procedures that can cause this bug to allow duplicates.
        A unique nullable index can be defined in one of the following two ways:
        CREATE TABLE TABLE1(NAME1 INT UNIQUE);
        OR
        CREATE TABLE TABLE1(NAME1 INT constraint uniq_null UNIQUE);

        A table compress can be done using the procedure SYSCS_UTIL.SYSCS_COMPRESS_TABLE, e.g.
        CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'TABLE1', 1)

        A bulk insert into a table can be done using the procedure SYSCS_UTIL.SYSCS_IMPORT_TABLE. The bulk insert can either REPLACE the existing contents of the table or add to them; which mode is intended is indicated by the last parameter of the procedure call.
        e.g. bulk insert with the REPLACE option:
        call SYSCS_UTIL.SYSCS_IMPORT_TABLE('APP', 'TABLE1' , 'data.txt' , null, null, 'utf-8', 1)
        e.g. bulk insert with the INSERT option:
        call SYSCS_UTIL.SYSCS_IMPORT_TABLE('APP', 'TABLE1' , 'data.txt' , null, null, 'utf-8', 0)
        Bulk insert caused problems with unique nullable indexes only in the following two scenarios (a repro sketch follows this list):
        1) When bulk insert with REPLACE is done with 0 rows in the data file. It does not matter whether the table was empty before the bulk insert with REPLACE.
        2) When bulk insert with INSERT is done on an empty table with multiple rows in the data file.
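
        A hedged repro sketch for scenario 2, assuming a hypothetical data file data.txt that holds a few distinct non-null values other than 1:

        -- Bulk insert multiple rows into an empty table that has a unique
        -- nullable constraint, then try to insert a duplicate value.
        CREATE TABLE TABLE1(NAME1 INT UNIQUE);
        call SYSCS_UTIL.SYSCS_IMPORT_TABLE('APP', 'TABLE1', 'data.txt', null, null, 'utf-8', 0);
        INSERT INTO TABLE1(NAME1) VALUES(1);
        INSERT INTO TABLE1(NAME1) VALUES(1);
        -- With the bug, the second INSERT of 1 succeeds; with the fix, it
        -- fails with ERROR 23505.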

        Mamta A. Satoor made changes -
        Attachment releaseNote.html [ 12447606 ]
        Knut Anders Hatlen added a comment -

        Hi Mamta,

        The release note looks good to me. Thanks for writing it. Two small comments:

        • I think the summary is supposed to be a one-liner. I suggest that it is changed to "Unique nullable constraint may be disabled after compress or import operations" and that the details are moved to the sections below.
        • typo: "may find that duplicates rows exist" - duplicates -> duplicate
        Mamta A. Satoor added a comment -

        Attaching another update to the release note based on the feedback from Knut.

        Mamta A. Satoor made changes -
        Attachment releaseNote.html [ 12447876 ]
        Mamta A. Satoor made changes -
        Status Open [ 1 ] Resolved [ 5 ]
        Resolution Fixed [ 1 ]
        Knut Anders Hatlen added a comment -

        Thanks, Mamta!

        Lily Wei added a comment -

        Changed the reference from 10.4.x.x to 10.4.2.1 per Rick's suggestion for the 10.6.2.0 release note (refer to DERBY-4878).

        Lily Wei made changes -
        Attachment releaseNote.html [ 12454149 ]
        Kathey Marsden made changes -
        Fix Version/s 10.6.2.0 [ 12315342 ]
        Fix Version/s 10.6.1.1 [ 12314973 ]
        Knut Anders Hatlen made changes -
        Fix Version/s 10.6.2.1 [ 12315343 ]
        Fix Version/s 10.6.2.0 [ 12315342 ]
        Rick Hillegas made changes -
        Fix Version/s 10.7.1.1 [ 12315564 ]
        Fix Version/s 10.7.1.0 [ 12314971 ]
        Knut Anders Hatlen added a comment -

        [bulk update] Close all resolved issues that haven't been updated for more than one year.

        Knut Anders Hatlen made changes -
        Status Resolved [ 5 ] Closed [ 6 ]
        Gavin made changes -
        Workflow jira [ 12511694 ] Default workflow, editable Closed status [ 12802830 ]

  People

  • Assignee: Mamta A. Satoor
  • Reporter: Brett Mason
  • Votes: 0
  • Watchers: 2