Derby / DERBY-4437

Concurrent inserts into table with identity column perform poorly

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 10.5.3.0
    • Fix Version/s: 10.9.1.0
    • Component/s: SQL
    • Labels:
      None
    • Issue & fix info:
      Release Note Needed
    • Bug behavior facts:
      Performance

      Description

      I have a multi-threaded application which is very insert-intensive. I've noticed that it sometimes can come into a state where it slows down considerably and basically becomes single-threaded. This is especially harmful on modern multi-core machines since most of the available resources are left idle.

      The problematic tables contain identity columns, and here's my understanding of what happens:

      1) Identity columns are generated from a counter that's stored in a row in SYS.SYSCOLUMNS. During normal operation, the counter is maintained in a nested transaction within the transaction that performs the insert. This allows the nested transaction to commit the changes to SYS.SYSCOLUMNS separately from the main transaction, and the exclusive lock that it needs to obtain on the row holding the counter can be released after a relatively short time. Concurrent transactions can therefore insert into the same table at the same time, without needing to wait for each other to commit or abort.

      2) However, if the nested transaction cannot lock the row in SYS.SYSCOLUMNS immediately, it will give up and retry the operation in the main transaction. This prevents self-deadlocks in the case where the main transaction already owns a lock on SYS.SYSCOLUMNS. Unfortunately, this also increases the time the row is locked, since the exclusive lock cannot be released until the main transaction commits. So as soon as there is one lock collision, the waiting transaction changes to a locking mode that increases the chances of others having to wait, which seems to result in all insert threads having to obtain the SYSCOLUMNS locks in the main transaction. The end result is that only one of the insert threads can execute at any given time as long as the application is in this state.
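      The feedback loop described in (1) and (2) can be sketched as a toy model in plain Java. This is purely illustrative — it is not Derby's actual implementation, and all class and method names are invented. The point it shows: on the fast path the counter lock is released immediately, but on a collision the thread falls back to holding the lock for the whole unit of work, which makes the next collision more likely.

```java
import java.util.concurrent.locks.ReentrantLock;

// Toy model (invented names, not Derby code) of the two locking modes.
class CounterLockModel {
    private final ReentrantLock counterLock = new ReentrantLock();
    private long counter;

    long updateCounter(Runnable restOfTransaction) {
        if (counterLock.tryLock()) {       // "nested transaction" path:
            try {
                return bumpCounter();      // lock released right away
            } finally {
                counterLock.unlock();
            }
        }
        counterLock.lock();                // fallback, "main transaction" path:
        try {
            long v = bumpCounter();
            restOfTransaction.run();       // lock held until "commit"
            return v;
        } finally {
            counterLock.unlock();
        }
    }

    private long bumpCounter() { return ++counter; }
}
```

Once one thread is on the fallback path, other threads see the lock busy for much longer, so they too fall back — matching the single-threaded behavior described above.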

      Attachments

      1. D4437PerfTest.java
        3 kB
        Knut Anders Hatlen
      2. D4437PerfTest2.java
        3 kB
        Knut Anders Hatlen
      3. derby-4437-01-aj-allTestsPass.diff
        43 kB
        Rick Hillegas
      4. derby-4437-02-ac-alterTable-bulkImport-deferredInsert.diff
        11 kB
        Rick Hillegas
      5. derby-4437-03-aa-upgradeTest.diff
        7 kB
        Rick Hillegas
      6. derby-4437-04-aa-reclaimUnusedValuesOnShutdown.diff
        4 kB
        Rick Hillegas
      7. derby-4437-05-aa-pluggablePreallocation.diff
        24 kB
        Rick Hillegas
      8. derby-4437-06-aa-selfTuning.diff
        10 kB
        Rick Hillegas
      9. derby-4437-07-ac-biggerDefault_propertyCanBeInteger.diff
        13 kB
        Rick Hillegas
      10. derby-4437-07-ad-biggerDefault_propertyCanBeInteger.diff
        13 kB
        Rick Hillegas
      11. derby-4437-08-aa-10.8upgrade.diff
        11 kB
        Rick Hillegas
      12. Experiments_4437.html
        4 kB
        Rick Hillegas
      13. insertperf.png
        6 kB
        Knut Anders Hatlen
      14. insertperf2.png
        7 kB
        Knut Anders Hatlen
      15. prealloc.png
        9 kB
        Knut Anders Hatlen
      16. releaseNote.html
        4 kB
        Rick Hillegas

        Issue Links

          Activity

          Knut Anders Hatlen added a comment -

          I haven't investigated this enough to say if (1) or (2) is the real problem. Since the nested transaction has to commit before it can release the lock, and a commit may need to wait for disk I/O operations, it may be that (2) is just a symptom, and the real problem is that all the insert threads compete for the same row lock.

          In my application, I could work around the problem by removing the identity column and instead maintain a counter in an AtomicInteger that's initialized by a SELECT MAX(id) query on start-up. This works because the application is one single process with multiple threads, so all threads have access to the AtomicInteger. If the clients run in different processes, such a workaround cannot be used, though.
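          The workaround described above might look roughly like this (a minimal sketch with invented names; in the real application the starting value would come from a SELECT MAX(id) query at start-up, which is passed in here as a plain argument):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the single-process workaround: generate ids in memory
// instead of using an identity column. Names are hypothetical.
class IdGenerator {
    private final AtomicLong next;

    // maxExistingId would be the result of SELECT MAX(id) at start-up
    IdGenerator(long maxExistingId) {
        this.next = new AtomicLong(maxExistingId + 1);
    }

    // lock-free and safe across threads within one process
    long nextId() {
        return next.getAndIncrement();
    }
}
```

As noted above, this only works because all inserting threads live in one process and share the same counter; clients in separate processes cannot use it.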

          Mike Matrigali added a comment -

          I would not be surprised if good performance gains could be gotten in this area, as I don't believe any optimization has happened. The code definitely predates today's processors and multi-core machines.

          I agree with both your assessments.

          Some work that could be done in this area:

           1) The system tries to limit the number of times it goes single-threaded by allocating a group of numbers every time it goes to update the system catalog. This number is probably too low for a multi-core system that inserts as fast as it can. As a test you could try to just bump this number to make sure it helps your app. A better Derby fix would be to make it more zero-admin, perhaps by tracking how often the value is being updated and dynamically adjusting it up and down. Up seems easy; I'm not exactly sure how to make it go down. The downside of a big number is that values are lost when the system shuts down.

           2) The current lock strategy is based on what was available from the lock manager when it was implemented. There may be better options. What the system really wants to do is wait indefinitely unless it is waiting on itself. For a normal application that does not query the system catalogs, a hit on this lock is usually not going to be a self-deadlock. So a quick fix might be to add a retry, or a longer wait on the lock. The best fix would be a new lock manager interface that allowed the nested transaction to wait as long as needed while ensuring it was not waiting on its parent transaction.
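           Suggestion (1) — amortizing the catalog update over a block of preallocated values — can be illustrated with a toy counter (plain Java with invented names, not Derby code). Only every blockSize-th request would touch the system catalog row; the rest are served from the preallocated range:

```java
// Toy model of block preallocation: the "catalog" row is written
// once per block of blockSize values, not once per value.
class PreallocatingCounter {
    private final int blockSize;
    private long next;          // next value to hand out
    private long blockEnd;      // first value NOT covered by the current block
    private long catalogValue;  // what the simulated catalog row would contain
    int catalogUpdates;         // how many times the catalog row was written

    PreallocatingCounter(long start, int blockSize) {
        this.blockSize = blockSize;
        this.next = start;
        this.blockEnd = start;
        this.catalogValue = start;
    }

    synchronized long nextValue() {
        if (next == blockEnd) {          // current block exhausted:
            catalogValue += blockSize;   // write the catalog row once,
            blockEnd = catalogValue;     // covering the next blockSize values
            catalogUpdates++;
        }
        return next++;
    }
}
```

With blockSize = 5, handing out 12 values touches the "catalog" only 3 times — and, as noted, the values of the last partially used block are lost on shutdown.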

          Rick Hillegas added a comment -

          Hopefully, the solution will be something that we can re-use for sequence generators (DERBY-712). As I read the SQL Standard, a sequence should "normally" not have any gaps, but no guarantees are made, and it is hard to see how holes could be avoided: the sequence is not affected by rollbacks, and it is supposed to change monotonically in one direction or the other.

          Pre-allocating a block of sequence numbers (Mike's solution #1) is attractive, particularly if we can release the unused ids when the database is brought down in an orderly fashion. I like the idea that the size of that block is self-tuning.

          Brett Bergquist added a comment -

          This bug is really killing us. We have transaction rates of around 30 inserts/second now, some of them in parallel, and about every couple of days the database server gets into this state. I am working around it by discontinuing the use of IDENTITY columns, but that requires a long down time to convert the database, and this is a continuously up system, so down time is hard to come by.

          Any solution via a patch or compiling Derby myself would be greatly appreciated. It is much quicker to stop, drop in a new jar, and restart than to convert about 18 million records that are using the identity column.

          Knut Anders Hatlen added a comment -

          A mechanism for allocating sequence numbers without blocking other threads was developed in DERBY-712. If someone wants to work on a similar solution for identity columns, they can probably reuse much of that code.

          Rick Hillegas added a comment -

          If we re-use the mechanism from DERBY-712, then we will probably see more holes in the lists of generated identity values. Holes will appear when the engine is bounced and some number of pre-allocated identity values are thrown away. I think this behavioral change would be ok for a maintenance release like 10.8.2. Other opinions?

          With sequences, the number of pre-allocated values is hard-coded to be 5. With a little effort, we could add an api to configure that number. I think it would be ok to use the same hard-coded number of pre-allocated values for identity columns, too. We can consider adding an api to configure this number after we gain experience with how pre-allocation behaves in the wild. Other opinions?

          Thanks,
          -Rick

          Dag H. Wanvik added a comment -

          The change seems a good way forward. I presume that with the API/knobs, you could get the present behavior?
          I am somewhat hesitant to OK the behavior change in a minor release, though. Choosing a default for the knob to mimic the old behavior would be fine of course, but that might not be the most sensible default going forward. Perhaps it would be OK to change the default to a more sensible value in 10.9? This would allow us to introduce the optimizations for those who need it in 10.8.2.

          Rick Hillegas added a comment -

          Attaching derby-4437-01-aj-allTestsPass.diff. This patch replaces the old identity column management with a scheme based on the sequence generators which were introduced by DERBY-712. Regression tests passed for me. More tests need to be written, edge cases need to be stressed, and some dead code may need to be pruned out.

          This patch does the following:

          1) Introduces a new subclass of SequenceUpdater to manage identity values in SYSCOLUMNS rows.

          2) Removes the old identity management and replaces it with calls to the new SequenceUpdater.

          3) No persistent forms were changed so this patch should not affect the user's ability to upgrade and soft-downgrade.

          The SYSCOLUMNS SequenceUpdater behaves just like the SYSSEQUENCES one: It pre-allocates ranges of identity values. The number of pre-allocated values is hard-coded to the same number of pre-allocated values used for sequences (5).

          Touches the following files:

          -----------

          M java/engine/org/apache/derby/impl/sql/catalog/SequenceUpdater.java

          Introduces a SequenceUpdater to manage SYSCOLUMNS. There was already a SequenceUpdater to manage SYSSEQUENCES.

          -----------

          M java/engine/org/apache/derby/impl/sql/compile/CreateSequenceNode.java
          M java/engine/org/apache/derby/iapi/sql/dictionary/SequenceDescriptor.java

          Logic which computes max/min bounds for integer types was moved into a subroutine for re-use.

          -----------

          M java/storeless/org/apache/derby/impl/storeless/EmptyDictionary.java
          M java/engine/org/apache/derby/impl/sql/compile/NextSequenceNode.java
          M java/engine/org/apache/derby/impl/sql/execute/BaseActivation.java
          M java/engine/org/apache/derby/iapi/sql/dictionary/DataDictionary.java

          An extra argument was added to DataDictionary.getCurrentValueAndAdvance() so that the method can be used for both sequences and identity columns. Some obsolete methods were removed.

          -----------

          M java/engine/org/apache/derby/iapi/reference/Property.java
          M java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java

          Added cache management for identity SequenceUpdaters.

          -----------

          M java/engine/org/apache/derby/impl/sql/execute/InsertConstantAction.java

          Some unused methods were removed. The array of RowLocations was left untouched and is still constructed by InsertNode. Leaving this array intact avoids the need to change the serialized form of this ConstantAction. That eliminates soft-upgrade/soft-downgrade problems.

          -----------

          M java/engine/org/apache/derby/impl/sql/execute/InsertResultSet.java

          Replaced the old identity management with calls to the SequenceUpdaters cached in the DataDictionary.

          -----------

          M java/testing/org/apache/derbyTesting/functionTests/tests/lang/AlterTableTest.java
          M java/testing/org/apache/derbyTesting/functionTests/tests/lang/AutoIncrementTest.java

          The existing test needed some tweaking:

          1) The preallocation of identity ranges changes the results of queries against SYSCOLUMNS.

          2) Fewer locks are held now, changing the results of queries against the lock vti.

          3) The biggest value in a BIGINT identity column used to be (Long.MAX_VALUE - 1). Now it is Long.MAX_VALUE, as it should be. I don't understand why a wrong result was canonized in AutoIncrementTest.

          Rick Hillegas added a comment -

          Committed derby-4437-01-aj-allTestsPass.diff to trunk at subversion revision 1135226.

          Rick Hillegas added a comment -

          Attaching derby-4437-02-ac-alterTable-bulkImport-deferredInsert.diff. This patch adds additional tests to verify that ALTER TABLE, import, and deferred INSERT work as spec'd with the new generator-based machinery for identity columns. Committed at subversion revision 1135754.

          Touches the following files:

          M java/testing/org/apache/derbyTesting/functionTests/tests/lang/AutoIncrementTest.java
          A java/testing/org/apache/derbyTesting/functionTests/tests/lang/t_4437_2.dat

          Rick Hillegas added a comment -

          Attaching derby-4437-03-aa-upgradeTest.diff. This patch adds an upgrade test case to verify that identity columns function correctly across upgrade and downgrade. Committed at subversion revision 1136036.

          Touches the following files:

          A java/testing/org/apache/derbyTesting/functionTests/tests/upgradeTests/Changes10_9.java
          M java/testing/org/apache/derbyTesting/functionTests/tests/upgradeTests/UpgradeRun.java

          Knut Anders Hatlen added a comment -

          Thanks for working on this issue, Rick. I haven't looked at the code yet, but I wrote a small performance test (see the attached Java class D4437PerfTest.java) and ran an experiment on a Sun Fire T2000 machine with 32 cores.

          The test runs multi-threaded inserts; each thread has its own table to avoid lock/latch conflicts. I just now realized that one table per thread is probably not the ideal test, since the problem reported here was actually lock contention... I'll update the test and rerun it, but I'm posting the results from this first run anyway (see the graph in insertperf.png), as they are quite interesting. Even in this test with no contention, head of trunk is able to insert rows almost twice as fast as 10.8.1.2 when the table has an identity column. Presumably this is because we don't need to access the system tables so often?

          Rick Hillegas added a comment -

          Thanks for running that performance test and posting the graph, Knut. Your theory about the performance boost sounds good to me.

          Knut Anders Hatlen added a comment -

          Here's another attempt at a performance test for this improvement. I modified the test to use a set of five tables, all with an identity column. Each thread inserts one row into each of the tables and then commits. This is closer to the scenario in which I saw this problem when I reported the issue. Since each transaction performs multiple inserts, escalating the locks on the system table from the nested transaction to the parent transaction has a higher likelihood of causing contention than in the previous test, which committed after every single insert. Also, since all threads work on the same set of tables, there should be more lock conflicts in the system table.

          This new graph (insertperf2.png) shows the results from the test. As expected, the difference between 10.8 and trunk is bigger than it was in the previous test, but not dramatically. With 10.8, Derby essentially only allows one thread to run at a time, so adding more threads doesn't increase the throughput. With trunk, the throughput reaches its maximum at three threads. That's a bit disappointing, given that the machine has 32 cores, but it might be hitting some other bottleneck, most likely disk I/O.

          For reference, I included results from running the same test without having an identity column in the tables, to see how well we could expect the test to scale if generating the identity values was eliminated completely as a bottleneck. That test maxed out around five threads, so only scaling up to three threads when we have identity columns doesn't sound unreasonable for this kind of load after all.

          I also experimented with the derby.language.identityGeneratorCacheSize property, but that didn't seem to have any effect on the results (I tried 10, 50, 100, as well as the default 32).

          Rick Hillegas added a comment -

          Thanks for running those experiments and for the analysis, Knut. There is one other knob which might be adjusted: the size of the pre-allocated identity range. This is hardcoded as SequenceGenerator.DEFAULT_PREALLOCATION_COUNT. Changing that knob did not have much effect on the experiments I ran on sequence generators. Thanks.

          Knut Anders Hatlen added a comment -

          Thanks Rick. I played with that setting and found that increasing it had a good effect on the scalability. See the attached graph, prealloc.png. In this experiment, the scalability improved with increasing pre-allocation count up to 160, but doubling it to 320 didn't improve it further.

          Is there a downside with increasing this parameter?

          Rick Hillegas added a comment -

          Hi Knut. Those are impressive results. The only downside I see to increasing this parameter is that we would leak more unused values every time the database shuts down. If that worried us, we could try to flush the current sequence value to disk on shutdown. What are your thoughts?

          rhillegas Rick Hillegas added a comment -

          I think that we will leak unused values every time the caches are invalidated and thrown away. I think that happens when the user does DDL too.

          rhillegas Rick Hillegas added a comment -

          People may want to configure the size of the preallocated ranges for sequences (see DERBY-5151). Being able to set the preallocation size to 1 will give people the power to eliminate the holes in sequences that occur when you shut down the database and throw away the unused part of the preallocated range. That, in turn, will give people a workaround if they can't tolerate the holes introduced by the discarded ranges. It may also be useful to tune the size of the preallocated range depending on how many processors a machine has.

          To let people configure the size of preallocated ranges, I propose that we introduce a new family of Derby properties:

          derby.sequence.cache.size.$UUID=$number

          where

          $UUID is the uuid of a sequence or the uuid of a table with an identity column

          $number is a non-negative number

          If this property is not specified, it defaults to a hardcoded number. Currently that number is 5, but it could be 160 (see Knut's experiments). Maybe the default can be some function of the number of processors on the machine (if we can figure that out).

          The property will be retrieved by PropertyUtil.getServiceProperty() when the generator is created. This will give it the following behaviors:

          1) It can be set at the system, database, and derby.properties levels.

          2) It is semi-static. That is, it won't change on the fly if you update the system or database properties. However, if you change the property and then do something which throws away the cache, then the new value of the property will be used when the system recreates the cache. The cache is thrown away at database shutdown and when DDL is run.
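
          Under this proposal, a database-level setting might look like the following derby.properties fragment. The UUID shown here is purely illustrative, not a real identifier:

```
# Hypothetical example: give this sequence a preallocation range of 160
# values. The UUID would come from SYS.SYSSEQUENCES (or SYS.SYSTABLES for
# a table with an identity column); the one below is made up.
derby.sequence.cache.size.80000000-00d2-b38f-4cda-000a0a412c00=160
```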

          knutanders Knut Anders Hatlen added a comment -

          So if we could write the sequence value to disk on eviction from the sequence cache and on shutdown, we'd only leak values on an unclean shutdown/crash, right? (And on transaction rollback, but that could happen even before these changes.) The sequence caches are implemented using the generic cache manager, so it shouldn't be too difficult to implement it, since writing to disk on eviction and shutdown is exactly what the page cache and container cache do. Leaking a bigger chunk of values on unclean shutdown sounds acceptable to me, since applications will have to be prepared for holes in any case.

          rhillegas Rick Hillegas added a comment -

          Hi Knut. Your conclusions sound correct to me. Thanks.

          mikem Mike Matrigali added a comment -

          Another option that might fit zero admin better is to have the system dynamically size the cache: make it bigger as it notices that the cache is getting used up in a "short" amount of time, and/or size it back down as activity slows. I'd rather see something like this than have users tuning the cache size bigger.
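
          A minimal sketch of such a heuristic, purely illustrative: the thresholds, growth factor, and the idea of passing in the time taken to consume the previous range are all assumptions, not Derby code.

```java
// Toy self-tuning allocator: grow the range when it is consumed quickly,
// shrink it when allocation requests slow down. Illustrative only.
public class AdaptiveRange {
    static final int MIN = 1;
    static final int MAX = 1024;
    private int rangeSize = 5; // start at the current hardcoded default

    // elapsedNanos: how long the previous range lasted before running out
    public int nextRangeSize(long elapsedNanos) {
        if (elapsedNanos < 100_000_000L) {           // burned in under 100 ms: grow
            rangeSize = Math.min(rangeSize * 2, MAX);
        } else if (elapsedNanos > 10_000_000_000L) { // lasted over 10 s: shrink
            rangeSize = Math.max(rangeSize / 2, MIN);
        }
        return rangeSize;
    }
}
```

          The real difficulty, as the later comments show, is picking inputs and thresholds that work across workloads.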

          Figuring out how to leak less by doing work on eviction/shutdown seems like a good idea, if it does not have too big a performance impact. Doing it on shutdown does not seem like a problem to me. I don't have a good feel for how often the cache eviction case happens.

          mikem Mike Matrigali added a comment -

          From the tests can you come up with what the current overhead is for allocating a chunk of sequence numbers on whatever hardware you are testing on?

          knutanders Knut Anders Hatlen added a comment -

          One way to estimate the overhead of allocating a chunk of sequence numbers is to look at the numbers in the single-threaded case. When allocating a chunk per insert (what we do in 10.8), there are ~11000 transactions. When allocating a chunk every five inserts (current trunk), the number is ~20000. As the frequency of allocations approaches 0 (1/160 and 1/320), the number of transactions seems to stabilize around 27000. So it would seem that the allocation alone is more expensive than the rest of the insert operation.
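
          A back-of-the-envelope check of that estimate, using the transaction counts above and treating the ~27000 tx plateau as the allocation-free insert rate:

```java
// Rough cost model from the single-threaded numbers above: at ~27000 tx
// the allocation cost is negligible; at ~11000 tx every insert also pays
// for one chunk allocation.
public class AllocOverhead {
    public static void main(String[] args) {
        double txWithAllocEveryInsert = 11000.0; // chunk size 1 (10.8 behavior)
        double txNoAlloc = 27000.0;              // chunk sizes 160-320 (plateau)

        double insertCost = 1.0 / txNoAlloc;            // time per plain insert
        double insertPlusAlloc = 1.0 / txWithAllocEveryInsert;
        double allocCost = insertPlusAlloc - insertCost;

        // ratio comes out around 1.45: one allocation costs more than the
        // rest of the insert, matching Knut's conclusion
        System.out.printf("allocation/insert cost ratio: %.2f%n",
                allocCost / insertCost);
    }
}
```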

          rhillegas Rick Hillegas added a comment -

          Attaching derby-4437-04-aa-reclaimUnusedValuesOnShutdown.diff. This patch makes orderly shutdown reclaim unused, pre-allocated ranges of sequence/identity numbers. Tests passed cleanly for me.

          My first attempt to code this involved putting the reclamation call in DataDictionaryImpl.stop(). That turned out to be too late during orderly shutdown--by that time the LCC could not be found. This is my second attempt. The reclamation call is now in BasicDatabase.stop().

          I also noticed that the DataDictionary was already reclaiming unused ranges when DDL invalidated the caches. So we should not be leaking sequence numbers when we perform DDL.

          Nothing is done about disorderly shutdown. If the application fails to shut down the database before exiting, then the engine will still leak unused sequence/identity numbers.

          Touches the following files:

          --------------

          M java/engine/org/apache/derby/iapi/sql/dictionary/DataDictionary.java
          M java/storeless/org/apache/derby/impl/storeless/EmptyDictionary.java
          M java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java

          Adds a new DataDictionary method: clearSequenceCaches(). This method makes the cached sequence/identity generators reclaim unused, pre-allocated values.

          --------------

          M java/engine/org/apache/derby/impl/db/BasicDatabase.java

          During orderly shutdown, the Database module now calls the new DataDictionary method.

          --------------

          M java/testing/org/apache/derbyTesting/functionTests/tests/lang/SequenceGeneratorTest.java

          Updates a test case to check that values don't leak during orderly shutdown.

          rhillegas Rick Hillegas added a comment -

          Committed derby-4437-04-aa-reclaimUnusedValuesOnShutdown.diff at subversion revision 1137985.

          rhillegas Rick Hillegas added a comment -

          With the previous checkin, the behavior of this improvement has changed. Now Derby no longer leaks unused sequence/identity values--provided that the database is shut down in an orderly fashion before the VM exits. However, holes will still appear in sequences and identity columns if you don't park your databases before the VM exits.

          Is the behavioral change narrow enough now that we think this work can be backported to the 10.8 branch?

          Thanks,
          -Rick

          rhillegas Rick Hillegas added a comment -

          Attaching derby-4437-05-aa-pluggablePreallocation.diff. This patch implements pluggable allocators for sequence/identity ranges so that customers can override Derby's default logic for determining how long the pre-allocated ranges should be. I am running tests now.

          This patch does the following:

          1) Introduces a new public api interface: SequencePreallocator. Customers can customize how they want to pre-allocate sequence/identity ranges by implementing their own SequencePreallocator and then pointing the following new Derby property at it. The property can be set at the system, database, and derby.properties levels.

          -Dderby.language.sequence.preallocator=MyRangeAllocator

          2) Supplies a default implementation of SequencePreallocator. For this first increment, the default implementation just specifies the range size used in 10.8 (5 values).

          In a follow-on patch, I will recode the default SequencePreallocator to implement what Mike suggested: The size of the range will keep growing until it reaches the limit that the application can handle. Over time the range may shrink again if the application needs fewer values. Hopefully, this will be good enough for a scalable out-of-the-box experience.

          Customers can write their own SequencePreallocators to do the following:

          1) Set the preallocation value to 1. This eliminates the leaking of preallocated values when the VM exits gracelessly--at the cost of losing the extra concurrency addressed by this JIRA.

          2) Set the preallocation value to some other, larger, hardcoded value.

          3) Optimize preallocation to handle spikes: don't ever shrink the size of the range, just grow it as necessary.
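
          For illustration, a custom allocator along the lines of option 1) might look like the sketch below. The interface is reproduced inline from the description above so the sketch is self-contained; the real one lives in org.apache.derby.catalog, and the exact method signature shown here is an assumption:

```java
// Assumed shape of the new public interface, per the description above.
interface SequencePreallocator {
    int nextRangeSize(String schemaName, String sequenceName);
}

// Option 1): always preallocate a single value. No values leak on a
// graceless VM exit, at the cost of the concurrency gains from this issue.
public class MyRangeAllocator implements SequencePreallocator {
    // Derby would need to instantiate this, so keep a public no-arg constructor.
    public MyRangeAllocator() {}

    @Override
    public int nextRangeSize(String schemaName, String sequenceName) {
        return 1;
    }
}
```

          It would then be selected with -Dderby.language.sequence.preallocator=MyRangeAllocator, as described above.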

          Touches the following files:

          ---------------

          M java/engine/org/apache/derby/iapi/reference/Property.java
          A java/engine/org/apache/derby/catalog/SequencePreallocator.java
          M tools/javadoc/publishedapi.ant

          New property and the interface which customers can implement in order to control how Derby pre-allocates ranges.

          ---------------

          M java/engine/org/apache/derby/loc/messages.xml
          M java/shared/org/apache/derby/shared/common/reference/SQLState.java

          New and changed messages.

          ---------------

          M java/engine/org/apache/derby/impl/sql/catalog/SequenceGenerator.java
          A java/engine/org/apache/derby/impl/sql/catalog/SequenceRange.java
          M java/engine/org/apache/derby/impl/sql/catalog/SequenceUpdater.java

          Replaces the old hard-coded range allocation with the new pluggable scheme.

          ---------------

          M java/storeless/org/apache/derby/impl/storeless/EmptyDictionary.java

          Corrects a typo here.

          ---------------

          M java/testing/org/apache/derbyTesting/functionTests/tests/lang/SequenceGeneratorTest.java

          New tests to verify the behavior of custom SequencePreallocators.

          rhillegas Rick Hillegas added a comment -

          Tests passed cleanly for me against derby-4437-05-aa-pluggablePreallocation.diff except for two heisenbugs which I see on Mac OSX from time to time: testPing and testInvalidLDAPServerConnectionError (see derby-5110 and derby-4869). Committed at subversion revision 1138434.

          kmarsden Kathey Marsden added a comment -

          In general I don't think people shut down on exiting the VM (although I wish they would). So what we are offering is a change in behavior, but with a workaround for it which at the same time encourages good Derby practices. That is tempting, but I have heard quite a few embedded products say that they do not have control of the VM at exit time, so they can't shut down. Also, I think we need to consider surprises that come up on upgrade in the field, where we need a non-code solution. So I think we should backport and leave it on by default, but add a property to turn it off. The more conservative option might be to have it off by default on 10.8, but I think the fix is valuable enough to keep it on and take the risk.

          mikem Mike Matrigali added a comment -

          I would be ok if someone backported this to 10.8. As I understand it, the current performance is at a level which, for some applications, makes the feature unusable without the fix. It may increase gaps, but we should continue to document that the feature in no way guarantees no gaps. Any application relying on no gaps is a bug waiting to happen whether we backport this change or not. In general I don't like seeing improvements backported, but performance issues are sometimes ok with me when they cause big enough problems.

          For me, I lean toward the behavior change backport being ok for 10.8, as I would guess there are not a lot of applications out there on this release yet. The upcoming release will be available to all users as a supported Apache release. I don't think I would backport it farther than 10.8.

          rhillegas Rick Hillegas added a comment -

          Attaching a couple files:

          1) derby-4437-06-aa-selfTuning - This is an experimental patch, not intended for commit. This patch adds a crude heuristic to the default range preallocator. The heuristic attempts to tune the size of the preallocation range based on the rate at which identity values are being requested.

          2) Experiments_4437.html - This is a webpage of results from some experiments which I ran, measuring the throughput of Knut's experiment with various hardcoded range lengths and with the crude heuristic.

          Based on my experiments, I believe that I can offer the following modest conclusions:

          i) I don't know how to write useful self-tuning logic which will accomplish what Mike wants. This feels like a research project to me. Someone else may want to pick up this project but I do not feel I can spend any more time on it.

          ii) Derby is able to keep boosting the throughput as you increase the size of the preallocated range--it keeps delivering better throughput well past your tolerance for leaked values.

          iii) I can't offer the customer anything better than a knob which declares how many values the app is willing to leak.

          I can do the following additional work on this issue. Let me know if you think I should do this work:

          A) Add a knob so that apps can tune the size of the default preallocated range.

          B) Change the current default range size of 5 to some other number. If you think this is useful, let me know what a better number would be.

          Devising self-tuning logic sounds like an interesting project but one which should happen under another JIRA.

          mikem Mike Matrigali added a comment - - edited

          Here are my thoughts. The work so far looks great and I would be fine seeing it checked in as is and backported. I think it is very reasonable to log a JIRA for the tunable aspect and hope someone is interested in that work. I actually think that is better, as I would rather see the tunable aspect go into only trunk and a new release rather than backported as a bug fix--since, as you point out, it is experimental. It would be great if we could have a discussion there on possible ideas for an algorithm. I actually think the project could be done by a newcomer, as the coding would be very localized once you check in; the hard part is just coding the tuning stuff in one place. We might have to do some new work to have interesting inputs to the algorithm.

          It may also be worth logging a separate improvement JIRA, if one does not exist, to solve the lost range on a crash. This is not a simple problem, and may not be worth the effort, but we might as well create a place holder. It would probably need new log records and specialized recovery of those during crash recovery. It is complicated in that it is logical work that needs to be done above store, but currently only store work is done during crash recovery. Another option would be some brute force work to actually scan the tables on recovery, find the "highest" value, and reset the range before an application came in. If there are indexes on these things this becomes much faster. Some of this work is similar to the problem we have with post-commit space reclamation work that is also lost when a crash happens.

          The following would be my votes on the changes you mentioned you are willing to do, but would not argue strongly with opposing views:

          o If a knob is added so a user could backtrack if they did not like losing ranges, then I would bump the default to something like 20, given Knut's experiments and Rick's notes on other databases' defaults. I would backport this change to 10.8, again only if the knob was included. Given how fast processors are nowadays, adding a commit I/O for every 5 inserts seems a high price when the system is doing 900 inserts a second.

          o If no knob, I think I would leave the default of 5 for the backport to 10.8 and bump the default to 20 in trunk. We could give notice in the 10.8 release that the upcoming 10.9 would bump this default.

          o Using the properties seems like complicated syntax to me. I assume using "alter table" is not possible as there is no standard in this area. What would people think about using a system procedure instead of a property? That way the call could simply take the standard schema and table name arguments, and would require the usual alter table database permissions to set. Underneath, the procedure call should just call an internal alter table call.

          rhillegas Rick Hillegas added a comment -

          For the record, here are the default lengths of the preallocated ranges for sequences in some other databases. These are the maximum number of values which are leaked if the database crashes:

          Oracle: 20
          Oracle RDB: 20
          DB2: 20
          Postgres: 1

          As discussed on DERBY-5151, there is no SQL Standard language for tuning the size of these ranges although the various non-standard approaches are all pretty similar.

          rhillegas Rick Hillegas added a comment -

          Thanks for the quick response, Mike. A couple comments:

          o I have changed the title of DERBY-5151 to indicate that it covers the issue of leaking identity values on abnormal exit.

          o Concerning the knob: A previous checkin introduced the following new Derby property. Currently, it can be set to the name of a class which provides custom preallocation logic. The custom preallocator can give you different range sizes per sequence/identity. We could also let the property be set to a number. If set to a number, then that would be the size of the preallocation range and it would apply to all sequences and identity columns:

          derby.language.sequence.preallocator

          o I agree that we should not introduce an additional property per sequence/identity.

          o Additional, non-standard SQL language would be acceptable to me. Other databases handle this issue with very similar language; the differences seem very slight to me. See DERBY-5151. With a little patience, I think we could agree on some almost standard language. A nice feature of the language-based approach is that dblook would reconstruct the knob settings.

          o A database procedure would work too. However, the knob settings would be lost when you exported/imported the database. This defect also affects the currently implemented workaround.

          o I don't want to put any effort into the procedure or the SQL language approaches at this time. But someone else is welcome to pick this up.
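          The dispatch described above (a property value that is either a plain number used as the range size, or the name of a custom preallocator class) can be sketched in self-contained Java. The interface and class names below mirror the ones under discussion but are local stand-ins, not Derby's actual internals:

```java
// Sketch: treat the property value as a fixed range size if it parses as an
// integer, otherwise as the name of a custom preallocator class.
// SequencePreallocator/SequenceRange here are illustrative local copies.
public class PreallocatorSketch {
    interface SequencePreallocator {
        int nextRangeSize(String schemaName, String sequenceName);
    }

    static class SequenceRange implements SequencePreallocator {
        private final int size;
        SequenceRange(int size) { this.size = size; }
        public int nextRangeSize(String schema, String sequence) { return size; }
    }

    static SequencePreallocator makePreallocator(String value) throws Exception {
        try {
            // Numeric value: fixed-size ranges for all sequences/identities.
            return new SequenceRange(Integer.parseInt(value));
        } catch (NumberFormatException nfe) {
            // Otherwise: instantiate the named custom preallocator class.
            return (SequencePreallocator)
                Class.forName(value).getDeclaredConstructor().newInstance();
        }
    }

    public static void main(String[] args) throws Exception {
        // "20" parses as a number, so every range has 20 values.
        System.out.println(makePreallocator("20").nextRangeSize("APP", "T"));
    }
}
```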

          mikem Mike Matrigali added a comment -

          I would like to get a community consensus on the "knob" issue before this issue is backported to 10.8.

          kmarsden Kathey Marsden added a comment -

          Regarding the knob I think derby.language.sequence.preallocator taking a number that is the size of the range would be fine. I am not totally clear if this is something that exists already or is a suggestion for a solution.

          rhillegas Rick Hillegas added a comment -

          Hi Kathey,

          Right now, derby.language.sequence.preallocator can be set to the name of a class which customizes the preallocation behavior of a sequence/identity. It would require a small amount of extra work to let this property also be set to a number. Thanks.

          rhillegas Rick Hillegas added a comment -

          No further opinions have surfaced. I propose to make the following changes:

          o Increase the size of the default preallocated range from 5 to 20 to match the behavior of other databases.

          o Change the default preallocator so that setting derby.language.sequence.preallocator to a positive integer will cause that number to be the size of the default preallocated ranges for both sequences and identity columns.

          o Backport the accumulated work on this issue to 10.8.

          o Supply a release note which describes the following behavior changes for 10.8.2:

          1) Throughput/concurrency of applications with identity columns should increase.

          2) Sequences and identity columns will leak up to 20 unused values apiece on abnormal shutdown.

          3) Applications can plug the leaks by performing orderly database shutdowns.

          4) Applications can revert to the old identity behavior by setting derby.language.sequence.preallocator=1. This will also reduce the concurrency of sequences.

          5) Applications can change the concurrency and leakage behavior of all sequences and identities by setting derby.language.sequence.preallocator to some other positive number.

          6) Applications can further customize the concurrency (and leakage size) of individual sequences and identities by setting derby.language.sequence.preallocator equal to the name of a user-written class which implements org.apache.derby.catalog.SequencePreallocator.

          If this does not sound like a good plan, please let me know. Thanks.
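          Item 6 can be illustrated with a minimal user-written preallocator. The real interface is org.apache.derby.catalog.SequencePreallocator (which also requires a public no-arg constructor); a local copy of the interface is declared here so the sketch is self-contained, and the method signature is an assumption based on this discussion:

```java
// Sketch of a custom preallocator giving different range sizes per sequence.
// The local SequencePreallocator interface stands in for Derby's
// org.apache.derby.catalog.SequencePreallocator.
public class CustomPreallocatorSketch {
    interface SequencePreallocator {
        int nextRangeSize(String schemaName, String sequenceName);
    }

    // Hypothetical policy: a sensitive identity gets tiny ranges (little
    // leakage on crash), everything else gets large ranges (more concurrency).
    static class MyPreallocator implements SequencePreallocator {
        public MyPreallocator() {}
        public int nextRangeSize(String schemaName, String sequenceName) {
            return "AUDIT_ID".equals(sequenceName) ? 1 : 100;
        }
    }

    public static void main(String[] args) {
        SequencePreallocator p = new MyPreallocator();
        System.out.println(p.nextRangeSize("APP", "AUDIT_ID"));
        System.out.println(p.nextRangeSize("APP", "ORDER_ID"));
    }
}
```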

          kmarsden Kathey Marsden added a comment -

          That sounds great to me. Thanks Rick for taking such care with the backport.

          knutanders Knut Anders Hatlen added a comment -

          +1 Sounds like a good plan to me too.

          dagw Dag H. Wanvik added a comment -

          +1 from me, too.

          rhillegas Rick Hillegas added a comment -

          Attaching derby-4437-07-ac-biggerDefault_propertyCanBeInteger.diff. This patch boosts the default preallocation size from 5 to 20. This patch also allows derby.language.sequence.preallocator to be set to an integer, which then becomes the default preallocation size. Regression tests passed cleanly for me on a previous version of this patch, except for known Heisenbugs and for a diff in AlterTableTest, which is corrected in this version of the patch.

          Touches the following files:

          --------

          M java/engine/org/apache/derby/impl/sql/catalog/SequenceRange.java
          M java/engine/org/apache/derby/impl/sql/catalog/SequenceUpdater.java

          The changes described above.

          --------

          M java/testing/org/apache/derbyTesting/functionTests/tests/lang/AlterTableTest.java
          M java/testing/org/apache/derbyTesting/functionTests/tests/lang/AutoIncrementTest.java

          Fixed to account for the new preallocation default.

          --------

          M java/testing/org/apache/derbyTesting/functionTests/tests/lang/SequenceGeneratorTest.java

          New test case for setting derby.language.sequence.preallocator to be an integer.

          rhillegas Rick Hillegas added a comment -

          Attaching a release note for this issue.

          knutanders Knut Anders Hatlen added a comment -

          The release note looks good. Thanks, Rick!

          The patch looks fine too. A couple of nits:

          • It would be good to add a couple of line breaks to the for loop in the isNumber() method to improve readability. Or perhaps just remove the method altogether and change the logic in makePreallocator() to:

          try
          {
              return new SequenceRange(Integer.parseInt(className));
          }
          catch (NumberFormatException nfe)
          {
              return (SequencePreallocator) Class.forName(className).newInstance();
          }

          ?

          • I didn't quite understand this change:

          + boolean caughtException = true;
            try
            {
                updater.getCurrentValueAndAdvance();
          -     fail( "Expected to catch cycle exception." );
          +     caughtException = false;
            }
            catch (Exception e)
            {}
          + if ( !caughtException )
          + {
          +     fail( "Expected to catch cycle exception." );
          + }

          I'm not able to find out what's changed here (except that the original code looked more concise). Did I miss something?

          rhillegas Rick Hillegas added a comment -

          Thanks for the quick review, Knut. I am attaching derby-4437-07-ad-biggerDefault_propertyCanBeInteger.diff, a new version which addresses your comments:

          1) I added some newlines to SequenceUpdater.isNumber() to improve readability.

          2) I reverted the confusing changes to SequenceGeneratorTest.vetBump(). They were cruft left over from an experiment to figure out why the revised test took so long to run.

          knutanders Knut Anders Hatlen added a comment -

          Thanks. The new patch looks good. +1

          rhillegas Rick Hillegas added a comment -

          Thanks, Knut. Committed derby-4437-07-ad-biggerDefault_propertyCanBeInteger.diff at subversion revision 1141567.

          rhillegas Rick Hillegas added a comment -

          Backported the following patches from trunk to the 10.8 branch. Tests passed cleanly for me. Committed to 10.8 branch at subversion revision 1141645:

          1135226 derby-4437-01-aj-allTestsPass.diff
          1135754 derby-4437-02-ac-alterTable-bulkImport-deferredInsert.diff
          1137985 derby-4437-04-aa-reclaimUnusedValuesOnShutdown.diff
          1138434 derby-4437-05-aa-pluggablePreallocation.diff
          1141567 derby-4437-07-ad-biggerDefault_propertyCanBeInteger.diff

          The following patches were NOT backported:

          1136036 derby-4437-03-aa-upgradeTest.diff (10.9-specific upgrade test)
          derby-4437-06-aa-selfTuning (Uncommitted, experimental patch)

          In a follow-on patch, I would like to write a 10.8.2-specific upgrade test to verify the behavior of soft-(up/down)grade between 10.8.1 and 10.8.2.

          This patch touches the following files:

          M java/storeless/org/apache/derby/impl/storeless/EmptyDictionary.java
          M java/engine/org/apache/derby/impl/sql/compile/CreateSequenceNode.java
          M java/engine/org/apache/derby/impl/sql/compile/NextSequenceNode.java
          M java/engine/org/apache/derby/impl/sql/execute/InsertResultSet.java
          M java/engine/org/apache/derby/impl/sql/execute/BaseActivation.java
          M java/engine/org/apache/derby/impl/sql/execute/InsertConstantAction.java
          M java/engine/org/apache/derby/impl/sql/catalog/SequenceGenerator.java
          M java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java
          A + java/engine/org/apache/derby/impl/sql/catalog/SequenceRange.java
          M java/engine/org/apache/derby/impl/sql/catalog/SequenceUpdater.java
          M java/engine/org/apache/derby/impl/db/BasicDatabase.java
          M java/engine/org/apache/derby/iapi/sql/dictionary/DataDictionary.java
          M java/engine/org/apache/derby/iapi/sql/dictionary/SequenceDescriptor.java
          M java/engine/org/apache/derby/iapi/reference/Property.java
          A + java/engine/org/apache/derby/catalog/SequencePreallocator.java
          M java/engine/org/apache/derby/loc/messages.xml
          M java/shared/org/apache/derby/shared/common/reference/SQLState.java
          M java/testing/org/apache/derbyTesting/functionTests/tests/lang/AlterTableTest.java
          M java/testing/org/apache/derbyTesting/functionTests/tests/lang/AutoIncrementTest.java
          A + java/testing/org/apache/derbyTesting/functionTests/tests/lang/t_4437_2.dat
          M java/testing/org/apache/derbyTesting/functionTests/tests/lang/SequenceGeneratorTest.java
          M tools/javadoc/publishedapi.ant

          rhillegas Rick Hillegas added a comment -

          Bumped version on 10.8 branch to 10.8.1.6 so that upgrade testing can tell which distributions contain the work done on this issue.

          knutanders Knut Anders Hatlen added a comment -

          What kind of problem did the bumping of the version number solve? The only 10.8 release used in the upgrade tests is 10.8.1.2, which already has a version number distinct from 10.8.1.5.

          rhillegas Rick Hillegas added a comment -

          Hi Knut,

          It hasn't caused a problem yet because I haven't checked in any tests which are sensitive to the distinction between 10.8.1.2 and 10.8.1.6. I am writing those tests now. Thanks.

          rhillegas Rick Hillegas added a comment - - edited

          Attaching derby-4437-08-aa-10.8upgrade.diff. This patch adds more upgrade tests for the changed behavior of sequences and identity columns. Committed at subversion revision 1142013.

          I ran the upgrade tests against trunk, upgrading from the following versions:

          10.5.3.0
          10.6.1.0
          10.6.2.1
          10.7.1.1
          10.8.1.2
          10.8.1.6

          Touches the following files:

          M java/testing/org/apache/derbyTesting/functionTests/tests/upgradeTests/UpgradeRun.java
          A java/testing/org/apache/derbyTesting/functionTests/tests/upgradeTests/Changes10_8_2.java
          M java/testing/org/apache/derbyTesting/functionTests/tests/upgradeTests/UpgradeChange.java

          rhillegas Rick Hillegas added a comment -

          Ported 1142013 from trunk to 10.8 branch at subversion revision 1142052.

          rhillegas Rick Hillegas added a comment -

          Resolving this issue because I don't plan to do any more work on it. Follow-on tasks have been created and linked to this issue.

          rhillegas Rick Hillegas added a comment -

          Re-opening this issue. The concurrency improvements were backed out of the trunk by DERBY-5687.

          rhillegas Rick Hillegas added a comment -

          Unassigning myself from this issue.

          rhillegas Rick Hillegas added a comment -

          A next attempt to improve the concurrency of identity columns could build on Mike's suggestion on DERBY-5493 that we create an internal sequence generator (represented in SYSSEQUENCES) for every identity column. Here are some ideas about this approach:

          1) I think that we could use SYSCOLUMNS.COLUMNDEFAULTID to hold the uuid of the internal sequence (SYSSEQUENCES.SEQUENCEID). I think this should be ok because you can't declare a default value for an identity column. This would make it relatively easy to find the internal sequence backing an identity column.

          2) We could add a SYSCS_UTIL.PEEK_AT_IDENTITY() function to retrieve the instantaneous current value of the identity column. This would be similar to the SYSCS_UTIL.PEEK_AT_SEQUENCE() function introduced by DERBY-5493.
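          The preallocating-generator-plus-peek shape behind idea (2) can be modeled with a toy in-memory sketch. This is purely illustrative and is not Derby's SYSSEQUENCES machinery; all names are invented:

```java
// Toy model: a sequence generator that hands out values from preallocated
// ranges, plus a "peek" that reports the instantaneous current value
// without consuming it.
public class PeekSketch {
    static class SequenceGenerator {
        private long current;
        private long rangeEnd;        // exclusive end of the preallocated range
        private final int rangeSize;

        SequenceGenerator(long start, int rangeSize) {
            this.current = start;
            this.rangeEnd = start;    // no values preallocated yet
            this.rangeSize = rangeSize;
        }

        synchronized long next() {
            if (current == rangeEnd) {
                // In Derby, this is roughly where a nested transaction would
                // durably bump the catalog row by rangeSize before any of the
                // new values are handed out.
                rangeEnd = current + rangeSize;
            }
            return current++;
        }

        synchronized long peek() { return current; }
    }

    public static void main(String[] args) {
        SequenceGenerator g = new SequenceGenerator(1, 20);
        g.next(); g.next(); g.next();   // consume 1, 2, 3
        System.out.println(g.peek());   // next value to be handed out
    }
}
```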

          rhillegas Rick Hillegas added a comment -

          Linking to derby-5493 because the discussion on that issue may inform a next attempt to improve the concurrency of identity columns.

          rhillegas Rick Hillegas added a comment -

          Closing this bug because it now duplicates DERBY-6542.


            People

            • Assignee:
              Unassigned
              Reporter:
              knutanders Knut Anders Hatlen
            • Votes:
              1
              Watchers:
              3

              Dates

              • Created:
                Updated:
                Resolved: