Derby
DERBY-5249

A table created with 10.0.2.1 with constraints cannot be dropped with 10.5, failing with a NullPointerException (insane build) or "ASSERT FAILED Failed to find sharable conglomerate descriptor for index conglomerate" (sane build)

    Details

    • Issue & fix info:
      High Value Fix, Workaround attached
    • Bug behavior facts:
      Regression

      Description

      In 10.0.2.1 there was some bug that caused a duplicate entry in sys.sysconglomerates.
      After running the attached repro_create.sql with 10.0.2.1, you will see two rows returned instead of one with:

      select c.constraintname, c.constraintid, cong.conglomerateid, cong.conglomeratename from sys.sysconglomerates cong, sys.syskeys k, sys.sysconstraints c where c.constraintname = 'PK_RS' and c.constraintid =k.constraintid and k.conglomerateid = cong.conglomerateid ;

      I am not sure what practical impact this has with 10.0 as you can still drop the table s.rs with that version.
      On connecting to the database with 10.5 (either soft or hard upgrade; tested with 10.5.3.2 - 1103924),

      DROP TABLE S.RS fails with:
      Caused by: java.sql.SQLException: Java exception: 'ASSERT FAILED Failed to find sharable conglomerate descriptor for index conglomerate # 785: org.apache.derby.shared.common.sanity.AssertFailure'.
        at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(SQLExceptionFactory.java:45)
        at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(SQLExceptionFactory40.java:119)
        at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(SQLExceptionFactory40.java:70)
        ... 17 more
      Caused by: org.apache.derby.shared.common.sanity.AssertFailure: ASSERT FAILED Failed to find sharable conglomerate descriptor for index conglomerate # 785
        at org.apache.derby.shared.common.sanity.SanityManager.THROWASSERT(SanityManager.java:162)
        at org.apache.derby.shared.common.sanity.SanityManager.THROWASSERT(SanityManager.java:147)
        at org.apache.derby.iapi.sql.dictionary.ConglomerateDescriptor.describeSharedConglomerate(ConglomerateDescriptor.java:638)
        at org.apache.derby.iapi.sql.dictionary.ConglomerateDescriptor.drop(ConglomerateDescriptor.java:428)
        at org.apache.derby.iapi.sql.dictionary.ConstraintDescriptor.drop(ConstraintDescriptor.java:738)
        at org.apache.derby.impl.sql.execute.DDLSingleTableConstantAction.dropConstraint(DDLSingleTableConstantAction.java:144)
        at org.apache.derby.impl.sql.execute.DDLSingleTableConstantAction.dropConstraint(DDLSingleTableConstantAction.java:107)
        at org.apache.derby.impl.sql.execute.DropTableConstantAction.dropAllConstraintDescriptors(DropTableConstantAction.java:315)
        at org.apache.derby.impl.sql.execute.DropTableConstantAction.executeConstantAction(DropTableConstantAction.java:222)
        at org.apache.derby.impl.sql.execute.MiscResultSet.open(MiscResultSet.java:61)
        at org.apache.derby.impl.sql.GenericPreparedStatement.executeStmt(GenericPreparedStatement.java:416)
        at org.apache.derby.impl.sql.GenericPreparedStatement.execute(GenericPreparedStatement.java:297)
        at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(EmbedStatement.java:1235)
        ... 10 more

      and with an insane build with a NullPointerException:
      java.lang.NullPointerException
        at org.apache.derby.iapi.sql.dictionary.ConglomerateDescriptor.drop(Unknown Source)
        at org.apache.derby.iapi.sql.dictionary.ConstraintDescriptor.drop(Unknown Source)
        at org.apache.derby.impl.sql.execute.DDLSingleTableConstantAction.dropConstraint(Unknown Source)
        at org.apache.derby.impl.sql.execute.DDLSingleTableConstantAction.dropConstraint(Unknown Source)
        at org.apache.derby.impl.sql.execute.DropTableConstantAction.dropAllConstraintDescriptors(Unknown Source)
        at org.apache.derby.impl.sql.execute.DropTableConstantAction.executeConstantAction(Unknown Source)
        at org.apache.derby.impl.sql.execute.MiscResultSet.open(Unknown Source)
        at org.apache.derby.impl.sql.GenericPreparedStatement.executeStmt(Unknown Source)
        at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
        at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
        at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source)
        at org.apache.derby.impl.jdbc.EmbedStatement.executeUpdate(Unknown Source)

      Still need to figure out the exact versions affected, when the duplicate row bug was fixed, and when the drop stopped working.

      To reproduce, connect to a database with 10.0.2.1
      (the jars can be found at http://svn.apache.org/repos/asf/db/derby/jars/10.0.2.1)

      and run the attached script repro_create.sql.

      Then connect with the latest on the trunk or the 10.5 branch and run:

      DROP TABLE S.RS;

      The table will not drop. The workaround is to drop the table with the old version, 10.0.2.1.
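The duplicate-row check described above can be sketched in plain Java. This is only an illustration with hypothetical row data modeled on the query output shown later in this issue: the join of sys.sysconglomerates, sys.syskeys, and sys.sysconstraints should return one row per constraint, so any conglomerateid appearing more than once marks an affected catalog.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CatalogRowCheck {
    public static void main(String[] args) {
        // Simulated join result rows: {conglomerateid, conglomeratename},
        // modeled on the bad 10.0 catalog output in this issue.
        List<String[]> rows = Arrays.asList(
            new String[] {"848c0061-0130-4d54-e465-000000287600", "SQL110601033139780"},
            new String[] {"848c0061-0130-4d54-e465-000000287600", "SQL110601033139890"});

        // Count rows per conglomerateid; more than one means a bad catalog.
        Map<String, Integer> counts = new HashMap<>();
        for (String[] r : rows) {
            counts.merge(r[0], 1, Integer::sum);
        }
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            if (e.getValue() > 1) {
                System.out.println("duplicate conglomerateid: " + e.getKey());
            }
        }
    }
}
```

In a real database the rows would come from running the join query above through JDBC or ij rather than a hard-coded list.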

      1. derby-5249_diff.txt
        3 kB
        Kathey Marsden
      2. my10db.zip
        45 kB
        Kathey Marsden
      3. repro_create.sql
        0.7 kB
        Kathey Marsden
      4. repro_create.sql
        1 kB
        Kathey Marsden

        Issue Links

          Activity

          Kathey Marsden added a comment -

          Attaching reproduction. Run repro_create.sql with 10.0 and notice the extra row. Then connect with 10.5 and

          DROP TABLE S.RS;

          Kathey Marsden added a comment -

          Here is the repro_create.sql with the unnecessary columns removed.

          Kathey Marsden added a comment -

          I accidentally added this comment (or something like it) to DERBY-3299. Moving it here.

          I think it makes sense to add tests for both the extra rows and the drop problem to the upgrade tests to
          get an exact understanding of which versions are affected.

          But just as a data point, I noticed that the duplicate row is created with 10.1.1.0 but not with the latest on the 10.1
          branch, so the fix was likely backported. 10.3.1.4, the earliest release of 10.3, didn't show the problem.
          I didn't do any manual checking of when the drop problem was introduced. Presumably any database that ever created problematic constraints may be affected by the drop issue, even if it was upgraded to intervening versions.

          Kathey Marsden added a comment -

          I am not 100% sure, but I think the original duplicate-row-on-create problem was fixed with DERBY-1854. I think the drop problem started with DERBY-3299 in 10.4.

          Kathey Marsden added a comment -

          I am trying to understand the situation where we need to deal with the duplicate row
          from the 10.0 db. In the problematic database we see:

          select c.constraintname, c.constraintid, cong.conglomerateid, cong.conglomeratename, cong.conglomeratenumber from sys.sysconglomerates cong, sys.syskeys k, sys.sysconstraints c where c.constraintname = 'PK_RS' and c.constraintid = k.constraintid and k.conglomerateid = cong.conglomerateid;

          CONSTRAINTNAME |CONSTRAINTID                         |CONGLOMERATEID                       |CONGLOMERATENAME   |CONGLOMERATENUMBER
          PK_RS          |8ca44062-0130-4d54-e465-000000287600 |848c0061-0130-4d54-e465-000000287600 |SQL110601033139780 |785
          PK_RS          |8ca44062-0130-4d54-e465-000000287600 |848c0061-0130-4d54-e465-000000287600 |SQL110601033139890 |785

          and

          ij> select * from sys.sysconglomerates where conglomeratenumber = 785;
          SCHEMAID                             |TABLEID                              |CONGLOMERATENUMBER |CONGLOMERATENAME   |ISIN& |DESCRIPTOR      |ISCO& |CONGLOMERATEID
          1c16805c-0130-4d54-e465-000000287600 |2c44c05e-0130-4d54-e465-000000287600 |785                |SQL110601033139780 |true  |UNIQUE BTREE (& |true  |848c0061-0130-4d54-e465-000000287600
          1c16805c-0130-4d54-e465-000000287600 |2c44c05e-0130-4d54-e465-000000287600 |785                |SQL110601033139890 |true  |UNIQUE BTREE (& |true  |848c0061-0130-4d54-e465-000000287600

          The second entry for conglomerate number 785 appeared upon creation of the foreign key.
          I will attach the zipped database as my10db.zip. I am thinking, at a high level, that describeSharedConglomerate should return this second conglomerate, but I am not totally sure.

          Kathey Marsden added a comment -

          10.0 database with problematic sys.sysconglomerates entry.

          Mike Matrigali added a comment -

          I don't know this code so can't answer details, but here is what I would do to debug this. From what you have
          posted so far, I believe what we are looking at is: there was a bug in one or more very old releases that
          created bad system catalog entries. Some number of problems resulted from these entries, but it's not clear what.
          A number of releases after that bug was fixed could still drop the tables with the bad system catalogs. Then at
          some point a new bug was fixed, but as a side effect Derby could no longer deal with buggy system catalogs on drop.

          1) Determine exactly the releases that built the bad system catalog.
          2) Determine exactly the releases that could drop the bad system catalog. Maybe looking at how drop worked
          before can help you understand what needs to be done now.
          3) You have posted the bad catalog entries; what are the correct catalog entries?
          4) Can you describe overall what problem the current drop is encountering? Not just what you have
          already posted, but something higher level. For instance, did we already drop an index but not catch the
          extra row, and now the extra row, being an orphan, is causing the problem?
          This will probably be more clear once
          you post the answer to 3. For instance, do we just have bad rows in the catalogs, or are there "extra" real files that
          need to be dealt with?
          5) The routine you mention changing has a name that suggests a utility routine that may be used for more than
          just drop. If so, it may not be appropriate to make it handle buggy catalog entries. It would seem reasonable to
          limit the fix to just allowing a drop of the bad tables. Worst case, maybe we have to handle this as a fix to the
          catalogs on upgrade, but that would be messy. It might be cleaner if we could somehow recognize the problem
          and possibly fork off to a one-off set of code.

          Kathey Marsden added a comment -

          Thank you Mike for the pointers. Here are some answers to some of your questions.

          1) Determine exactly the releases that built the bad system catalog.

          Apache Released Versions with bad catalogs:
          10.0.2.1
          10.1.1.0
          10.1.2.1

          and fixed in Apache Release
          10.1.3.1

          10.2 forward does not have the problem.
          The exact fix for the bad catalogs on the 10.1 branch was revision 411398.
          DERBY-655 getImportedKeys returns duplicate rows in some cases.
          I verified this by backing that fix out of 10.1.
          (Note DERBY-655 introduced a regression, DERBY-1854. Also note I was wrong in my initial theory that DERBY-1854 was what fixed the dup conglomerate; it looks like while DERBY-655 corrected the dup, it introduced some other bad catalog problem which would cause corruption on compress. DERBY-1854 went into the head of the 10.1 and 10.0 branches (never released). Both the DERBY-655 and DERBY-1854 fixes were in 10.2.1.6.)

          2) Determine exactly the releases that could drop the bad system catalog.

          The drop error was introduced in Apache Release 10.4.1.3 with the fix for DERBY-3299, "Uniqueness violation error (23505) occurs after dropping a PK constraint if there exists a foreign key on the same columns". This was a pretty extensive fix and had upgrade implications, so it was not backported; all 10.4, 10.5, 10.6, 10.7, and 10.8 releases are affected by the drop problem, but lower branches are not.

          3) You have posted the bad catalog entries; what are the correct catalog entries?
          Here is an example with trunk and the repro_create script. This is actually surprising to me, as there are still two entries in sys.sysconglomerates, but the join query with sys.sysconstraints and sys.syskeys returns a single row. I think maybe the problem with the old one is that both say UNIQUE, but I am not sure about that. I need to understand it better.

          ij> select c.constraintname, c.constraintid, cong.conglomerateid, cong.conglomeratename, cong.conglomeratenumber from sys.sysconglomerates cong, sys.syskeys k, sys.sysconstraints c where c.constraintname = 'PK_RS' and c.constraintid = k.constraintid and k.conglomerateid = cong.conglomerateid;
          CONSTRAINTNAME |CONSTRAINTID                         |CONGLOMERATEID                       |CONGLOMERATENAME   |CONGLOMERATENUMBER
          PK_RS          |e50d80a4-0130-524c-af38-0000001c6908 |94bc40a2-0130-524c-af38-0000001c6908 |SQL110602144057310 |1153

          ij> select * from sys.sysconglomerates where conglomeratenumber = 1153;
          SCHEMAID                             |TABLEID                              |CONGLOMERATENUMBER |CONGLOMERATENAME   |ISIN& |DESCRIPTOR      |ISCO& |CONGLOMERATEID
          23ce809c-0130-524c-af38-0000001c6908 |6c44409f-0130-524c-af38-0000001c6908 |1153               |SQL110602144057310 |true  |UNIQUE BTREE (& |true  |94bc40a2-0130-524c-af38-0000001c6908
          23ce809c-0130-524c-af38-0000001c6908 |6c44409f-0130-524c-af38-0000001c6908 |1153               |SQL110602144057610 |true  |BTREE (1)       |true  |070a00b0-0130-524c-af38-0000001c6908

          2 rows selected

          For 4 and 5, I am going to do some more debugging and also try to understand what is really wrong with the old catalogs. Any insight appreciated.

          Kathey Marsden added a comment -

          So in the bad catalogs, the conglomerateid for the two rows is the same. This is what DERBY-655 fixed. In the current code the method:

          public ConglomerateDescriptor describeSharedConglomerate(
              ConglomerateDescriptor[] descriptors, boolean ignoreThis)

          has the code below to determine, when ignoreThis is set, whether we are looking at "this" as we loop through the two descriptors passed in:

          // Skip if ignoreThis is true and it describes "this".
          if (ignoreThis &&
              getUUID().equals(descriptors[i].getUUID()))
          {
              continue;
          }

          With the bad catalogs, getUUID() matches for both entries, so we continue and do not return the shared descriptor. A quick hack to return the second match allows the table to be dropped, but I am not sure that is the right fix, or whether I might actually return "this" in some instances by doing that. describeSharedConglomerate is currently only used by drop.
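As a standalone sketch of why this skip logic fails (simplified, hypothetical class names; not Derby's actual implementation): when both catalog rows carry the same UUID, the loop skips every candidate and returns nothing.

```java
import java.util.UUID;

public class SharedConglomerateSketch {
    // Minimal stand-in for a conglomerate descriptor row.
    static final class Descriptor {
        final UUID uuid;
        final String name;
        Descriptor(UUID uuid, String name) { this.uuid = uuid; this.name = name; }
    }

    // Mirrors the skip logic quoted above: when ignoreThis is set,
    // skip any descriptor whose UUID matches "this".
    static Descriptor describeShared(Descriptor self, Descriptor[] all, boolean ignoreThis) {
        for (Descriptor d : all) {
            if (ignoreThis && self.uuid.equals(d.uuid)) {
                continue; // with duplicate UUIDs, BOTH rows are skipped
            }
            return d;
        }
        return null; // nothing found: the assert failure / NPE case in Derby
    }

    public static void main(String[] args) {
        UUID shared = UUID.randomUUID();
        // Bad pre-DERBY-655 catalogs: two rows, same UUID, different names.
        Descriptor a = new Descriptor(shared, "SQL110601033139780");
        Descriptor b = new Descriptor(shared, "SQL110601033139890");
        Descriptor found = describeShared(a, new Descriptor[] { a, b }, true);
        System.out.println(found == null ? "no sharable descriptor" : found.name);
    }
}
```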

          Kathey Marsden added a comment -

          Linking relevant issues.
          There is an interesting conversation regarding the duplicates in DERBY-1343. Dan said:

          "... both ConglomerateDescriptors (rows) have the same key information and so it actually doesn't matter which gets dropped. The only difference is the conglomerate name for the backing index which is never used. As far as I can see the ConstraintDescriptor only links to the ConglomerateDescriptor through the UUID (which is the same in the two rows in the 10.0 code before the fix to DERBY-655).

          So I don't see any real need to write upgrade code that handles this situation. I will add some comments to the code to clarify it."

          Perhaps it does not matter which descriptor gets returned from describeSharedConglomerate either, but I will look more closely tomorrow.

          Kathey Marsden added a comment -

          Attaching a patch for this issue derby-5249_diff.txt.
          The change is to make the check when ignoreThis is set compare not only getUUID() but also getConglomerateName(). That way we can handle the pre-DERBY-655 cases where the UUID might be the same.

          Also in the patch are some fixes to the test that were needed once the test progressed further.

          Running tests now.
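A minimal sketch of the patched comparison, under the assumption that the descriptor exposes its conglomerate name the way the patch description implies (hypothetical class names, not the actual Derby code): a row is treated as "this" only if both the UUID and the name match, so one of the duplicate pre-DERBY-655 rows survives the skip and can be returned.

```java
import java.util.UUID;

public class PatchedCheckSketch {
    static final class Descriptor {
        final UUID uuid;
        final String name;
        Descriptor(UUID uuid, String name) { this.uuid = uuid; this.name = name; }
    }

    static Descriptor describeShared(Descriptor self, Descriptor[] all, boolean ignoreThis) {
        for (Descriptor d : all) {
            // Patched condition: skip only if BOTH the UUID and the
            // conglomerate name match "this".
            if (ignoreThis
                    && self.uuid.equals(d.uuid)
                    && self.name.equals(d.name)) {
                continue;
            }
            return d;
        }
        return null;
    }

    public static void main(String[] args) {
        UUID shared = UUID.randomUUID();
        // Same duplicate-UUID rows as in the bad catalogs.
        Descriptor a = new Descriptor(shared, "SQL110601033139780");
        Descriptor b = new Descriptor(shared, "SQL110601033139890");
        Descriptor found = describeShared(a, new Descriptor[] { a, b }, true);
        // The second row is no longer skipped and is returned as the
        // sharable descriptor, so the drop can proceed.
        System.out.println(found == null ? "no sharable descriptor" : found.name);
    }
}
```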

          Mike Matrigali added a comment -

          Given the info you tracked down and posted already, and after reviewing the posted patch, the patch looks good to me.

          Kathey Marsden added a comment -

          Running regression tests on 10.8, I hit DERBY-5263, but went ahead and checked in as it did not seem related to this change.
          Running regression tests on 10.7, I hit DERBY-5119 and DERBY-4540. These were also seen in the previous tinderbox:
          http://dbtg.foundry.sun.com/derby/test/tinderbox_10.7_16/jvm1.6/testing/Limited/testSummary-1131298.html

          Knut Anders Hatlen added a comment -

          [bulk update] Close all resolved issues that haven't been updated for more than one year.


            People

            • Assignee:
              Kathey Marsden
            • Reporter:
              Kathey Marsden
            • Votes:
              0
            • Watchers:
              1
