DERBY-5120

Row from SYSDEPENDS gets deleted when a table has update triggers defined on it and an update is made to the table

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 10.2.2.0, 10.8.1.2
    • Fix Version/s: 10.8.2.2, 10.9.1.0
    • Component/s: SQL
    • Labels:
      None

      Description

      I have an ij script below which shows that the number of rows in SYSDEPENDS goes down by 1 for the following test case after an update is made to a table with update triggers defined on it. I am not sure what kind of problems the missing dependency might cause.

      connect 'jdbc:derby:c:/dellater/db1;create=true';
      CREATE TABLE ATDC_13_TAB1(c11 int, c12 int);
      insert into ATDC_13_TAB1 values (1,11);

      create table ATDC_13_TAB2(c21 int, c22 int);
      insert into ATDC_13_TAB2 values (1,11);

      create table ATDC_13_TAB3(c31 int, c32 int);
      insert into ATDC_13_TAB3 values (1,11);

      create table ATDC_13_TAB1_backup(c11 int, c12 int);
      insert into ATDC_13_TAB1_backup values (1,11);

      create trigger ATDC_13_TAB1_trigger_1 after update
      on ATDC_13_TAB1 for each row mode db2sql
      INSERT INTO ATDC_13_TAB1_BACKUP(C11)
      SELECT C21 from ATDC_13_TAB2;

      create trigger ATDC_13_TAB1_trigger_2 after update
      on ATDC_13_TAB1 for each row mode db2sql
      INSERT INTO ATDC_13_TAB1_BACKUP
      SELECT C31, C32 from ATDC_13_TAB3;

      -- following shows 14 rows
      select * from sys.sysdepends;
      update ATDC_13_TAB1 set c12=11;
      -- following shows only 13 rows
      select * from sys.sysdepends;

      I tried this on 10.2 and 10.8 and saw the same behavior on both. It seems like the dependency that gets dropped is between the stored prepared statement and a table. I have not spent enough time to find out more details, but I thought the behavior was worth pointing out.

      1. DERBY5120_patch1_diff.txt
        1 kB
        Mamta A. Satoor
      2. DERBY5120_patch1_stat.txt
        0.2 kB
        Mamta A. Satoor
      3. DERBY5120_patch2_diff.txt
        3 kB
        Mamta A. Satoor
      4. DERBY5120_patch2_stat.txt
        0.3 kB
        Mamta A. Satoor
      5. DERBY5120_patch3_diff.txt
        7 kB
        Mamta A. Satoor
      6. DERBY5120_patch3_stat.txt
        0.4 kB
        Mamta A. Satoor
      7. DERBY5120_patch4_diff.txt
        14 kB
        Mamta A. Satoor
      8. DERBY5120_patch4_stat.txt
        0.5 kB
        Mamta A. Satoor
      9. DERBY5120_patch5_diff.txt
        15 kB
        Mamta A. Satoor
      10. DERBY5120_patch5_stat.txt
        0.5 kB
        Mamta A. Satoor


          Activity

          Mamta A. Satoor added a comment -

          A much simpler test case for this jira is as follows:

          connect 'jdbc:derby:db1_2;create=true';
          CREATE TABLE ATDC_13_TAB1(c11 int, c12 int);
          insert into ATDC_13_TAB1 values (1,11);

          create trigger ATDC_13_TAB1_trigger_1 after update
          on ATDC_13_TAB1 for each row mode db2sql
          values(1);

          create trigger ATDC_13_TAB1_trigger_2 after update
          on ATDC_13_TAB1 for each row mode db2sql
          values(1,2);

          -- following shows 6 rows
          select * from sys.sysdepends;
          update ATDC_13_TAB1 set c12=11;
          -- following shows only 5 rows
          select * from sys.sysdepends;

          Mamta A. Satoor added a comment -

          A typical CREATE TRIGGER goes through the following steps as far as adding/deleting rows from the SYSDEPENDS table:
          1) Any time a trigger is created, CreateTriggerConstantAction.executeConstantAction() sends a CREATE_TRIGGER invalidation to the trigger table as shown below (the list of objects getting invalidated will include existing triggers defined on the trigger table).
          /*
          ** Send an invalidate on the table from which
          ** the triggering event emanates. This is
          ** to make sure that DML statements on this table
          ** will be recompiled. Do this before we create
          ** our trigger spses lest we invalidate them just
          ** after creating them.
          */
          dm.invalidateFor(triggerTable, DependencyManager.CREATE_TRIGGER, lcc);

          2) Next, CreateTriggerConstantAction.executeConstantAction() does the trigger action sps generation:

          /*
          ** Create the trigger action
          */
          actionspsd = createSPS(lcc, ddg, dd, tc, tmpTriggerId, triggerSd,
              actionSPSId, spsCompSchemaId, actionText, false, triggerTable);

          3) During trigger action sps generation, SPSDescriptor.compileStatement removes the existing dependencies of the trigger action sps in SYSDEPENDS as shown below (for a trigger getting created the first time, there will be no dependencies for the trigger action SPS; the same code is called when a trigger is found invalid, in which case the existing trigger action SPS dependencies will get dropped here):

          /*
          ** Clear out all the dependencies that exist
          ** before we recreate them so we don't grow
          ** SYS.SYSDEPENDS forever.
          */
          dm.clearDependencies(lcc, this, tc);
          4) After clearing out existing dependencies, SPSDescriptor.compileStatement() adds the dependencies that it finds during this compile:

          /*
          ** Copy over all the dependencies to me
          */
          dm.copyDependencies(preparedStatement, // from
              this,  // to
              false, // persistent only
              cm,
              tc);
          5) After finishing with trigger action SPS generation, CreateTriggerConstantAction.executeConstantAction adds the dependencies of the trigger descriptor on the trigger action sps and on the trigger table. Additionally, it adds the dependency of the trigger action sps on the trigger table (this is the dependency which later gets dropped when a recompile of the trigger action sps clears the existing trigger action sps dependencies before regenerating it; the trigger action sps regeneration does not add back the dependency between the trigger action sps and the trigger table):

          dm.addDependency(triggerd, actionspsd, lcc.getContextManager());
          dm.addDependency(triggerd, triggerTable, lcc.getContextManager());
          dm.addDependency(actionspsd, triggerTable, lcc.getContextManager());

          Following the steps above (to find what rows get added into SYSDEPENDS) for the first trigger in the simpler example that I posted on 01/Jul/11:
          create trigger ATDC_13_TAB1_trigger_1 after update
          on ATDC_13_TAB1 for each row mode db2sql
          values(1);
          Step 4) will not find any dependencies for trigger action values(1);
          Step 5) will add three rows into SYSDEPENDS, namely
          a) dependency between trigger descriptor for ATDC_13_TAB1_trigger_1 and trigger action SPS
          b) dependency between trigger descriptor for ATDC_13_TAB1_trigger_1 and trigger table ATDC_13_TAB1
          c) dependency between trigger action SPS and trigger table ATDC_13_TAB1

          This is how we end up with three rows for the first trigger ATDC_13_TAB1_trigger_1.

          When the 2nd trigger (ATDC_13_TAB1_trigger_2) is created, it also results in adding 3 rows into SYSDEPENDS, but additionally, in step 1), it invalidates the existing trigger ATDC_13_TAB1_trigger_1.
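          The steps above can be sketched as a toy model in plain Java. This is not Derby code; all names (ACTION_SPS_1, TRIGGER_1, etc.) are illustrative. It tracks SYSDEPENDS arcs through the two CREATE TRIGGERs and the recompile triggered by the UPDATE, reproducing the drop from 6 rows to 5:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Toy model (hypothetical names, not Derby code) of the SYSDEPENDS
// bookkeeping described above: arcs are "dependent -> provider" strings.
public class SysDependsSketch {
    static Set<String> sysdepends = new LinkedHashSet<>();

    static void addDependency(String dependent, String provider) {
        sysdepends.add(dependent + " -> " + provider);
    }

    // Models step 3: SPSDescriptor.compileStatement clears every arc
    // whose dependent is the sps being recompiled.
    static void clearDependencies(String dependent) {
        sysdepends.removeIf(arc -> arc.startsWith(dependent + " -> "));
    }

    public static void main(String[] args) {
        // CREATE TRIGGER ATDC_13_TAB1_trigger_1 (steps 4 and 5):
        // "values(1)" compiles with no dependencies, then the constant
        // action adds three arcs explicitly.
        addDependency("TRIGGER_1", "ACTION_SPS_1");
        addDependency("TRIGGER_1", "ATDC_13_TAB1");
        addDependency("ACTION_SPS_1", "ATDC_13_TAB1");

        // CREATE TRIGGER ATDC_13_TAB1_trigger_2: step 1 marks the first
        // trigger's action sps invalid, then three more arcs are added.
        boolean sps1Invalid = true;
        addDependency("TRIGGER_2", "ACTION_SPS_2");
        addDependency("TRIGGER_2", "ATDC_13_TAB1");
        addDependency("ACTION_SPS_2", "ATDC_13_TAB1");

        System.out.println("rows before update: " + sysdepends.size());

        // UPDATE fires the triggers; the invalid ACTION_SPS_1 recompiles:
        // its old arcs are cleared, but recompiling "values(1)" finds no
        // dependencies to copy back, so the sps -> table arc is lost.
        if (sps1Invalid) {
            clearDependencies("ACTION_SPS_1");
            // dm.copyDependencies(...) copies nothing for "values(1)"
        }

        System.out.println("rows after update: " + sysdepends.size());
    }
}
```

          Running the sketch prints 6 rows before the update and 5 after, matching the ij output in the test case.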

          Dag H. Wanvik added a comment -

          Hi Mamta, I am confused by this code to be honest. For example, the
          following fragment:

          /*
          ** Make underlying spses dependent on the trigger.
          */
          if (whenspsd != null) {
              dm.addDependency(triggerd, whenspsd, lcc.getContextManager());
          }

          dm.addDependency(triggerd, actionspsd, lcc.getContextManager());
          dm.addDependency(triggerd, triggerTable, lcc.getContextManager());
          dm.addDependency(actionspsd, triggerTable, lcc.getContextManager());

          As far as I understand, the first addDependency makes "triggerd"
          dependent on "whenspsd", but the comment says it's the other way
          around.

          Similarly, the second add makes "triggerd" dependent on
          "actionspsd". I would have thought it should be the other way around?
          That the stored sps should depend on the trigger definition, so as
          to get recompiled when the trigger definition changes.

          The third and the fourth adds make sense to me: when the table
          changes, the trigger definition would need recompilation and when the
          triggerTable changed the actionspsd would need recompilation.

          But I may be missing something here...

          Mamta A. Satoor added a comment -

          Dag, I tend to agree with your comments. The comment in the code does not seem to agree with the kinds of dependencies added by the first 2 addDependency statements. I tried looking through the history of this code, and it appears that we have had this code from the very beginning.

          But I debugged further to find out why one of the rows added at CREATE TRIGGER time gets dropped later on. The issue seems to be the trigger action sps getting invalidated and later getting recompiled when it gets fired the next time around. Going back to the example test case provided earlier:

          connect 'jdbc:derby:db1_2;create=true';
          CREATE TABLE ATDC_13_TAB1(c11 int, c12 int);
          insert into ATDC_13_TAB1 values (1,11);

          create trigger ATDC_13_TAB1_trigger_1 after update
          on ATDC_13_TAB1 for each row mode db2sql
          values(1);

          create trigger ATDC_13_TAB1_trigger_2 after update
          on ATDC_13_TAB1 for each row mode db2sql
          values(1,2);

          -- following shows 6 rows
          select * from sys.sysdepends;
          update ATDC_13_TAB1 set c12=11;
          -- following shows only 5 rows
          select * from sys.sysdepends;

          When the 2nd trigger (ATDC_13_TAB1_trigger_2) gets created, CreateTriggerConstantAction sends a CREATE_TRIGGER invalidation to the trigger table as shown below (this invalidates the first trigger, ATDC_13_TAB1_trigger_1):
          /*
          ** Send an invalidate on the table from which
          ** the triggering event emanates. This is
          ** to make sure that DML statements on this table
          ** will be recompiled. Do this before we create
          ** our trigger spses lest we invalidate them just
          ** after creating them.
          */
          dm.invalidateFor(triggerTable, DependencyManager.CREATE_TRIGGER, lcc);

          Later, when the "update ATDC_13_TAB1 set c12=11" statement is executed, it fires both triggers. The first trigger
          (ATDC_13_TAB1_trigger_1) finds during its execution that it is invalid. During recompilation, SPSDescriptor.compileStatement
          removes the existing dependencies recorded in the SYSDEPENDS table for the trigger action sps as shown below. This step ends up
          removing the dependency recorded between the trigger action sps and the trigger table (added by CreateTriggerConstantAction at the time of trigger creation), and that is where we come up one row short.
          /*
          ** Clear out all the dependencies that exist
          ** before we recreate them so we don't grow
          ** SYS.SYSDEPENDS forever.
          */
          dm.clearDependencies(lcc, this, tc);
          After clearing out existing dependencies for the invalid trigger action sps, the trigger action sps regeneration process adds the dependencies that it finds during this recompile in SPSDescriptor.compileStatement():

          /*
          ** Copy over all the dependencies to me
          */
          dm.copyDependencies(preparedStatement, // from
              this,  // to
              false, // persistent only
              cm,
              tc);

          But the dependency between the trigger action SPS and the trigger table never gets added back during the recompilation, and that is where we lose a row from SYSDEPENDS.
          Dag H. Wanvik added a comment -

          I guess the trigger action statement here ("values(1)") isn't really dependent on either the table or the trigger definition for that matter? So maybe it doesn't matter. Maybe it was gratuitous in the first place? Maybe the table dependency should always (only) be added by the compilation step of the SPS..? The comment ("SIDE EFFECTS") in SPSDescriptor#prepareAndRelease seems to indicate it would happen there..

          It seems weird that adding a second trigger should invalidate the first trigger's SPS, though, but as far as I can see it's a side effect of forcing DML statements to be recompiled when adding the second trigger. It would seem this could set off a ping-pong of invalidation between triggers, but I guess it doesn't, since on invalidation of the first trigger's SPS we just recompile and do not re-execute the code in CreateTriggerConstantAction (which would otherwise set off the CREATE_TRIGGER invalidation on the trigger table and hence a recompile of the second, etc.)...

          The whole idea of invalidation is to keep things up to date, maybe we should revisit what dependencies are really needed for triggers, right now it's not clear to me..

          Mamta A. Satoor added a comment -

          I have a test case (with comments in the script below) that shows how the missing row in SYSDEPENDS lets a trigger that has become invalid go undetected.

          connect 'jdbc:derby:db1;create=true';
          create table t1(c11 int, c12 int);
          insert into t1 values(1,1);
          create table t1_bkup1(c111 int, c112 int);
          create table t1_bkup2(c211 int, c212 int);

          -- for following trigger, 5 rows will be added to sysdepends
          create trigger tr1 after update of c11 on t1 referencing
          old_table as old for each statement insert into
          t1_bkup1 select * from old;

          -- for following trigger, 5 rows will be added to sysdepends
          -- This trigger creation will also invalidate the trigger tr1
          create trigger tr2 after update of c11 on t1 referencing
          old as oldrow for each row insert into
          t1_bkup2(c211) values (oldrow.c11);
          select count(*) from sys.sysdepends;

          -- following update will cause tr1 to recompile because it has been marked invalid
          -- But as part of recompile, trigger action sps's dependency on trigger table will
          -- be lost bringing down the total number of rows in sysdepends from 10 to 9
          update t1 set c11=2;
          select count(*) from sys.sysdepends;

          -- Because the dependency between trigger tr1's trigger action sps and trigger table
          -- is lost, following alter table on trigger table will not be noticed by tr1
          alter table t1 add column c113 int;
          -- following update will not fail for trigger tr1 even though it is not correct anymore (tr1
          -- is inserting into t1_bkup1 but after the alter table the number of columns in
          -- trigger table do not match the number of columns in t1_bkup1 but this is never
          -- caught because of the missing dependency.)
          update t1 set c11=3;

          -- We can prove that trigger tr1 is incorrect after alter table because if we tried
          -- creating another trigger with same definition as tr1, it will fail
          create trigger tr3 after update of c11 on t1 referencing
          old_table as old for each statement insert into
          t1_bkup1 select * from old;

          Dag H. Wanvik added a comment -

          Interesting. Why isn't the dependency on the trigger table (bound as old, oldrow) added during (re)prepare of the action statement, I wonder? If it were, the dependency would be added back.

          Dag H. Wanvik added a comment -

          What would be the correct behavior here? Obviously, tr1 is no longer valid after the alter table: should tr1 be silently dropped or should the alter table fail? We don't have a CASCADE/RESTRICT specification for ADD COLUMN, perhaps this proves we need it?

          Mamta A. Satoor added a comment -

          If the dependency between the trigger action sps and the trigger table wasn't dropped, I think the update sql would then fail, because trigger tr1 would find out during recompile that it is not valid anymore.

          As for ALTER TABLE drop column catching the dependency, there are a couple of jiras open for catching cases like this, I think.

          I am looking at how the dependency between trigger action sps and trigger table can be maintained during the recompile of trigger action.

          Rick Hillegas added a comment -

          The following theory and example may help people reason about how to fix the problem described by this JIRA.

          -------------- THEORY ---------------------

          The following discussion relies on these definitions and assumptions:

          i) Invalidating events - These include "object dropped" and "object modified".

          ii) "A -> B" - This is a dependency arc. It is shorthand for "A depends on B". Invalidating events travel backward along the dependency arcs, allowing each object to decide how to respond to the event. Possible responses include: "raise an exception because RESTRICT semantics are violated" and "recompile me".

          iii) Dependency Graph - This is a graph of all dependency arcs needed by Derby. The nodes in this graph are the persistent objects plus PreparedStatements. There is an arrow from A to B iff "A -> B".

          iv) Transitivity - The Dependency Graph obeys the following rule:

          if "A -> B" and "B -> C", then "A -> C"

          v) SYSDEPENDS contains dependency arcs between persistent objects.

          vi) Sufficient - SYSDEPENDS is said to be sufficient if it contains enough dependency arcs to reconstruct the entire Dependency Graph. Note that SYSDEPENDS is not the only input to constructing the Dependency Graph. Some arcs are implicitly described by other catalogs. Transitivity can be used to construct further arcs.

          vii) Minimal - SYSDEPENDS is said to be minimal if it contains the smallest number of arcs needed to reconstruct the entire Dependency Graph. For instance, if SYSDEPENDS contains the arcs "A -> B" and "B -> C" then SYSDEPENDS does not need to contain the "A -> C" arc because Derby can reconstruct that arc from the Transitivity rule.

          viii) Fuzzy - SYSDEPENDS is said to be fuzzy if it contains arcs that are not in the Dependency Graph.

          I would venture the following:

          I) SYSDEPENDS should be Sufficient and not Fuzzy.

          II) Even if SYSDEPENDS is Sufficient, Derby may have a bug which prevents it from constructing the complete Dependency Graph. For instance, Derby may be ignoring relevant information in other catalogs.

          III) I do not believe that SYSDEPENDS is Minimal. When DDL creates new arcs in the Dependency Graph, Derby does not recompute the contents of SYSDEPENDS just to guarantee a Minimal representation.

          ------------- EXAMPLE ------------------

          Let's apply this to a trigger example.

          INSERTs into table T1 fire a trigger which INSERTs into table T2

          This example gives rise to the following persistent objects:

          Tables T1 and T2
          Corresponding conglomerates C1 and C2
          Trigger TR
          Action statement A

          The following would be a Minimal representation in SYSDEPENDS:

          TR -> T1
          A -> T2

          Note that the following additional arcs do not need to be modelled in SYSDEPENDS, but can be constructed by Derby from information in other catalogs:

          T1 -> C1
          C1 -> T1
          T2 -> C2
          C2 -> T2
          TR -> A
          A -> TR

          Other arcs arise via the Transitive rule.

          What we actually see in SYSDEPENDS is the following Sufficient, non-Minimal representation:

          TR -> T1
          TR -> A (non-Minimal, could be constructed from SYSTRIGGERS)
          A -> T1 (non-Minimal, could be constructed by Transitivity)
          A -> T2
          A -> C2 (non-Minimal, could be constructed by Transitivity)
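The claim that the extra arcs are redundant can be checked mechanically. The following Python sketch (not part of the original comment; it just mirrors the arc lists above) builds the closure of the Minimal SYSDEPENDS arcs plus the catalog-derived arcs and confirms that each non-Minimal arc is derivable:

```python
# Sketch: the Minimal SYSDEPENDS arcs plus the catalog-derived arcs suffice
# to reconstruct the non-Minimal arcs Derby actually stores.

def transitive_closure(arcs):
    closure = set(arcs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (b2, c) in list(closure):
                if b == b2 and (a, c) not in closure:
                    closure.add((a, c))
                    changed = True
    return closure

minimal_sysdepends = {("TR", "T1"), ("A", "T2")}
catalog_arcs = {("T1", "C1"), ("C1", "T1"), ("T2", "C2"), ("C2", "T2"),
                ("TR", "A"), ("A", "TR")}

graph = transitive_closure(minimal_sysdepends | catalog_arcs)

# Each arc Derby stores redundantly is derivable from the graph:
for arc in [("TR", "A"), ("A", "T1"), ("A", "C2")]:
    assert arc in graph
```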

          Here is a script which shows this example:

          connect 'jdbc:derby:memory:db;create=true';

          create table t1( a int );
          create table t2( a int );
          create trigger trig after insert on t1 for each statement insert into t2( a ) values( 1 );

          select * from sys.sysdepends order by dependentid, providerid;
          select tablename, tableid from sys.systables where tablename like 'T%';
          select t.tablename, c.conglomerateid
          from sys.systables t, sys.sysconglomerates c
          where tablename like 'T%'
          and t.tableid = c.tableid;
          select triggerid from sys.systriggers;

          Mamta A. Satoor added a comment -

          Thanks, Rick and Dag, for all the time you have put into this jira. I really appreciate it.

          As part of this jira, though, I would like to focus on how to keep the dependency between the trigger action sps and the trigger table intact during recompile (it currently gets dropped during the recompile phase). As for the bigger picture of having only the necessary dependencies for triggers, and checking whether Derby is keeping only the required dependencies, I will create a new jira for that. I will put Dag's and Rick's comments in that jira and will also link the new jira to the current one.

          Dag H. Wanvik added a comment -

          Yes Mamta, I can see that the update statement would be the one failing with the current implementation (if the dependency of the action on the trigger table had not been dropped, as you point out). I think the behavior may be wrong, though; the time to fail or drop the trigger should probably be at ALTER TABLE time. But that's another issue, as you say. Meanwhile, I'm still curious why the dependency wouldn't/couldn't be added back when we reprepare, since the trigger action does contain a reference to the trigger table, albeit in the form of the "old"/"oldrow" aliases (in your second example repro).

          Rick, in your explanation of extended (data) dependencies, some dependencies are bi-directional, mirroring a one-to-one relationship. When would such bidirectional dependencies ever be required in SYSDEPENDS? I think in some cases we only register unidirectional ones. Is this "pragmatic", in the sense that we register deps in SYSDEPENDS (or in memory only) only in the (inverse) direction in which we want to invalidate?

          Rick Hillegas added a comment -

          Hi Dag,

          I think that SYSDEPENDS exists in order to make it fast to reconstruct the Dependency Graph. It doesn't need to contain arcs representing the relationship between tables and conglomerates because those can be figured out quickly by looking at other catalogs. But the arc backward from a triggering table to its trigger is hard to reconstruct, so that arc is put into SYSDEPENDS.

          The issue of bidirectional arcs (cycles) is interesting. I don't know if there are any examples of this in SYSDEPENDS. However, the cycles should exist in the Dependency Graph I think. The example of tables and conglomerates comes to mind:

          T1 -> C1 models the fact that dropping the conglomerate (e.g. by bulk import) should invalidate the table.

          C1 -> T1 models the fact that dropping the table should also drop the conglomerate.

          I don't think that Derby systematically implements the Dependency Graph I have described. My impression is that dependency management is a hodge-podge today. Some of the relationships are managed by the dependency subsystem. Other relationships are managed by little winks and nods in other parts of the SQL interpreter. In my opinion this hodge-podge makes dependency management brittle and hard to reason about.

          Hi Mamta,

          I may not be following your analysis. But it sounds to me as though the following arc has vanished from SYSDEPENDS:

          A -> triggeringTable

          I am suggesting that this is a non-Minimal arc and doesn't need to be in SYSDEPENDS. Derby should be able to construct this arc on the fly via Transitivity from the following arcs:

          A -> TR (this is known from SYSTRIGGERS)
          TR -> triggeringTable (this is still in SYSDEPENDS if I understand correctly)

          Thanks,
          -Rick

          Mamta A. Satoor added a comment -

          Created a new jira to see if dependency info in SYSDEPENDS can be cleaned up a little with all the great info provided by Rick.

          The new jira number is DERBY-5323: SYSDEPENDS may be keeping redundant dependency info. Specific information for the trigger case is in this jira, but there might be other cases as well.

          I have connected the 2 jiras.

          Mamta A. Satoor added a comment -

          Rick, I was planning to keep my focus on just maintaining the dependency that was originally intended for the trigger. At create trigger time, we add the dependency between the trigger action and the trigger table. We lose that during trigger recompile. I think the fix will be quite easy: have SPSDescriptor add this dependency info rather than create trigger doing it. This will take care of both create trigger and trigger action recompile due to invalidity. Let me know if you, Dag, or anyone else have any feedback on moving the trigger action -> trigger table dependency from Create Trigger to SPSDescriptor. Thanks

          I agree that some of these dependencies can probably be established programmatically rather than SYSDEPENDS having to carry them. But I would like that work to go as part of the new jira DERBY-5323.

          Rick Hillegas added a comment -

          Hi Mamta,

          Your plan sounds good to me. Thanks.

          Mamta A. Satoor added a comment -

          I am attaching a patch which moves the recording of the trigger action sps's dependency on the trigger table from the create trigger constant action to SPSDescriptor. This should take care of both create trigger time and sps regeneration time. I am seeing a couple of failures with this patch. I hope to have those tests fixed by tomorrow.

          Mamta A. Satoor added a comment -

          I am attaching another patch which resolves the two test problems I was seeing with the earlier patch:

          1) AlterTableTest.testDropColumn was failing because the number of rows in sysdepends was not as expected by the test. The problem was that AlterTableConstantAction does a drop and recreate of the trigger and was adding the dependency between the trigger action sps and the trigger table. It does not need to add that dependency anymore because it is taken care of by SPSDescriptor. Removing the dependency-add code from AlterTableConstantAction resolved the problem.
          2) triggerGeneral.sql was failing because a trigger action used select * from the trigger table to insert into another table. Later an alter table was performed to add one more column to the trigger table. With the dependency missing before, the trigger action was not getting regenerated, and hence the trigger didn't detect that it was no longer valid. After my dependency changes, the trigger action gets invalidated when a new column is added to the trigger table, and this causes the trigger to fail. I have fixed the test to not do select * in the trigger action so add column will not impact it. The test in triggerGeneral has been written for a specific purpose, and that is why I didn't want it to fail because of select *. I will add another test which covers a case where a select * in the trigger action becomes invalid with add column.

          The only task left on this jira is to add the test I mentioned. Additionally, I plan to add an upgrade test.

          If anyone has any feedback, please let me know.

          Mamta A. Satoor added a comment -

          Adding a new patch which now has a test for the changes. Next, I will work on an upgrade test.

          Mamta A. Satoor added a comment -

          Attaching patch DERBY5120_patch4_diff.txt which now also has an upgrade test. The upgrade test's intention is to verify that the sysdepends row that may have been dropped in earlier releases will be restored during the various phases of upgrade. In other words, when a pre-10.9 db is soft/hard upgraded to trunk, the missing row will be added back into sysdepends. But after the soft upgrade, if the db is taken back to its original release, the row will be lost again. I am verifying this by looking at the count of rows in sysdepends.

          I am running into one issue with this test, though. There are a few releases where the rows in sysdepends do not match what the test expects. I am doing more debugging to find the cause.

          Mamta A. Satoor added a comment - - edited

          I have found that the new upgrade test added in patch 4 fails for the 10.5.1.1, 10.5.3.0, 10.6.1.0 and 10.6.2.1 releases. I do not believe the problem is with my changes; I am debugging further to find out why the test fails only with these 4 releases. The test is failing for those 4 releases because the number of rows expected by the test does not match what's actually found in the database.

          Mamta A. Satoor added a comment -

          Attaching the final patch, which is ready for commit. The reason the newly added upgrade test (from patch 4) fails with the 10.5.1.1, 10.5.3.0, 10.6.1.0 and 10.6.2.1 releases is that those releases are missing the DERBY-4835 fix.

          Before DERBY-4835 was fixed, the clearing of stored statements (including trigger action spses) during upgrade happened conditionally. DERBY-4835 made changes so that the stored statements get marked invalid unconditionally during the upgrade phase. But the changes for DERBY-4835 did not make it into 10.5.1.1, 10.5.3.0, 10.6.1.0 and 10.6.2.1. Because of this missing fix, trigger action spses do not get marked invalid when the database is taken back to the original db release after a soft upgrade (if the original db release is one of the releases mentioned above). The newly added upgrade test relies on trigger action spses getting invalidated during the upgrade phase and getting recompiled when they are next fired, thus altering the number of rows in sysdepends. Because of this, I have disabled the upgrade test for those 4 releases.

          I will go ahead and commit this patch.

          Mamta A. Satoor added a comment -

          BTW, when I run the junit suite with patch number 5, I see only the known intermittent failures shown below. derbyall ran fine with no errors.
          1) testBootLock(org.apache.derbyTesting.functionTests.tests.store.BootLockTest)java.lang.InterruptedException
          at org.apache.derbyTesting.functionTests.tests.store.BootLockTest.testBootLock(BootLockTest.java:158)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
          at org.apache.derbyTesting.junit.BaseTestCase.runBare(BaseTestCase.java:112)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:16)
          at org.apache.derbyTesting.junit.BaseTestSetup.run(BaseTestSetup.java:57)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:16)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:16)
          2) ManagementMBeanTest:clientjava.lang.InterruptedException
          at org.apache.derbyTesting.junit.SpawnedProcess.complete(SpawnedProcess.java:182)
          at org.apache.derbyTesting.junit.NetworkServerTestSetup.tearDown(NetworkServerTestSetup.java:394)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:20)
          at junit.extensions.TestSetup.run(TestSetup.java:23)
          at org.apache.derbyTesting.junit.BaseTestSetup.run(BaseTestSetup.java:51)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:16)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:16)
          There were 3 failures:
          1) testPingWithWrongHost(org.apache.derbyTesting.functionTests.tests.derbynet.NetworkServerControlClientCommandTest)junit.framework.AssertionFailedError: Could not find expectedString:Unable to find host in output:Wed Jul 13 19:37:36 PDT 2011 : Could not connect to Derby Network Server on host nothere, port 1527: Connection timed out: connect

          at org.apache.derbyTesting.junit.BaseTestCase.assertExecJavaCmdAsExpected(BaseTestCase.java:516)
          at org.apache.derbyTesting.functionTests.tests.derbynet.NetworkServerControlClientCommandTest.assertFailedPing(NetworkServerControlClientCommandTest.java:147)
          at org.apache.derbyTesting.functionTests.tests.derbynet.NetworkServerControlClientCommandTest.testPingWithWrongHost(NetworkServerControlClientCommandTest.java:112)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
          at org.apache.derbyTesting.junit.BaseTestCase.runBare(BaseTestCase.java:112)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:23)
          at org.apache.derbyTesting.junit.BaseTestSetup.run(BaseTestSetup.java:57)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:23)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:23)
          2) testInvalidLDAPServerConnectionError(org.apache.derbyTesting.functionTests.tests.jdbcapi.InvalidLDAPServerAuthenticationTest)junit.framework.AssertionFailedError
          at org.apache.derbyTesting.functionTests.tests.jdbcapi.InvalidLDAPServerAuthenticationTest.testInvalidLDAPServerConnectionError(InvalidLDAPServerAuthenticationTest.java:122)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
          at org.apache.derbyTesting.junit.BaseTestCase.runBare(BaseTestCase.java:112)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:16)
          at org.apache.derbyTesting.junit.BaseTestSetup.run(BaseTestSetup.java:57)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:16)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:16)
          3) testInvalidLDAPServerConnectionErrorjunit.framework.AssertionFailedError: Failed to delete 8 files (root=C:\p4clients\svnmain\client7\trunk\systest\dellater\system\singleUse\oneuse48: C:\p4clients\svnmain\client7\trunk\systest\dellater\system\singleUse\oneuse48\db.lck (isDir=false, canRead=true, canWrite=true, size=38), C:\p4clients\svnmain\client7\trunk\systest\dellater\system\singleUse\oneuse48\log\log1.dat (isDir=false, canRead=true, canWrite=true, size=1048576), C:\p4clients\svnmain\client7\trunk\systest\dellater\system\singleUse\oneuse48\log (isDir=true, canRead=true, canWrite=true, size=0), C:\p4clients\svnmain\client7\trunk\systest\dellater\system\singleUse\oneuse48\seg0\c10.dat (isDir=false, canRead=true, canWrite=true, size=8192), C:\p4clients\svnmain\client7\trunk\systest\dellater\system\singleUse\oneuse48\seg0\cc0.dat (isDir=false, canRead=true, canWrite=true, size=8192), C:\p4clients\svnmain\client7\trunk\systest\dellater\system\singleUse\oneuse48\seg0\cd1.dat (isDir=false, canRead=true, canWrite=true, size=8192), C:\p4clients\svnmain\client7\trunk\systest\dellater\system\singleUse\oneuse48\seg0 (isDir=true, canRead=true, canWrite=true, size=0), C:\p4clients\svnmain\client7\trunk\systest\dellater\system\singleUse\oneuse48 (isDir=true, canRead=true, canWrite=true, size=0)
          at org.apache.derbyTesting.junit.BaseJDBCTestCase.assertDirectoryDeleted(BaseJDBCTestCase.java:1526)
          at org.apache.derbyTesting.junit.DropDatabaseSetup.removeDir(DropDatabaseSetup.java:119)
          at org.apache.derbyTesting.junit.DropDatabaseSetup.access$000(DropDatabaseSetup.java:35)
          at org.apache.derbyTesting.junit.DropDatabaseSetup$1.run(DropDatabaseSetup.java:105)
          at java.security.AccessController.doPrivileged(AccessController.java:202)
          at org.apache.derbyTesting.junit.DropDatabaseSetup.removeDirectory(DropDatabaseSetup.java:102)
          at org.apache.derbyTesting.junit.DropDatabaseSetup.removeDirectory(DropDatabaseSetup.java:98)
          at org.apache.derbyTesting.junit.DropDatabaseSetup.removeDatabase(DropDatabaseSetup.java:91)
          at org.apache.derbyTesting.junit.TestConfiguration$4.tearDown(TestConfiguration.java:747)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:20)
          at junit.extensions.TestSetup.run(TestSetup.java:16)
          at org.apache.derbyTesting.junit.BaseTestSetup.run(BaseTestSetup.java:57)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:16)
          at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
          at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
          at junit.extensions.TestSetup.run(TestSetup.java:16)

          Mamta A. Satoor added a comment -

          I have committed the changes for this Jira as revision 1146915. The committed changes match patch 5, except that the upgrade test no longer relies on the number of rows in SYSDEPENDS. Instead, the upgrade test exercises this issue by issuing the UPDATE statement after a statement that should cause trigger invalidation, showing that the UPDATE does not fail in pre-10.9 releases but does fail correctly after a soft or hard upgrade to 10.9.
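          For context, the test strategy described here can be sketched in ij using the tables and trigger from this issue's description. This is a hypothetical illustration, not the committed test code; the exact trigger-invalidating statement used by the test is not shown in this issue, so it is left abstract below.

          ```sql
          -- Hypothetical sketch of the upgrade-test scenario (not the
          -- committed test). Tables and trigger follow the issue description.
          CREATE TABLE ATDC_13_TAB1(c11 int, c12 int);
          CREATE TABLE ATDC_13_TAB2(c21 int, c22 int);
          CREATE TABLE ATDC_13_TAB1_BACKUP(c11 int, c12 int);

          CREATE TRIGGER ATDC_13_TAB1_trigger_1 AFTER UPDATE
          ON ATDC_13_TAB1 FOR EACH ROW MODE DB2SQL
          INSERT INTO ATDC_13_TAB1_BACKUP(C11)
          SELECT C21 FROM ATDC_13_TAB2;

          -- <some statement that should invalidate the trigger's stored
          --  prepared statement; the exact SQL used by the committed test
          --  is not given in this issue>

          -- Pre-10.9: the UPDATE runs against the stale stored statement
          -- and does not fail. After a soft or hard upgrade to 10.9, the
          -- trigger action is revalidated and the UPDATE fails correctly.
          UPDATE ATDC_13_TAB1 SET c12 = 11;
          ```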

          Knut Anders Hatlen added a comment -

          [bulk update] Close all resolved issues that haven't been updated for more than one year.


            People

            • Assignee:
              Mamta A. Satoor
            • Reporter:
              Mamta A. Satoor