Derby
DERBY-5358

SYSCS_COMPRESS_TABLE failed with conglomerate not found exception

    Details

    • Urgency:
      Normal
    • Issue & fix info:
      High Value Fix

      Description

      When running the D4275.java repro attached to DERBY-4275 (with the patch invalidate-during-invalidation.diff as well as the fix for DERBY-5161 to prevent the select thread from failing) in four parallel processes on the same machine, one of the processes failed with the following stack trace:

      java.sql.SQLException: The exception 'java.sql.SQLException: The conglomerate (4,294,967,295) requested does not exist.' was thrown while evaluating an expression.
      at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(SQLExceptionFactory40.java:98)
      at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Util.java:142)
      at org.apache.derby.impl.jdbc.Util.seeNextException(Util.java:278)
      at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(TransactionResourceImpl.java:407)
      at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(TransactionResourceImpl.java:348)
      at org.apache.derby.impl.jdbc.EmbedConnection.handleException(EmbedConnection.java:2290)
      at org.apache.derby.impl.jdbc.ConnectionChild.handleException(ConnectionChild.java:82)
      at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(EmbedStatement.java:1334)
      at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeStatement(EmbedPreparedStatement.java:1686)
      at org.apache.derby.impl.jdbc.EmbedPreparedStatement.execute(EmbedPreparedStatement.java:1341)
      at D4275.main(D4275.java:52)
      Caused by: java.sql.SQLException: The exception 'java.sql.SQLException: The conglomerate (4,294,967,295) requested does not exist.' was thrown while evaluating an expression.
      at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(SQLExceptionFactory.java:45)
      at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(SQLExceptionFactory40.java:122)
      at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(SQLExceptionFactory40.java:71)
      ... 10 more
      Caused by: java.sql.SQLException: The conglomerate (4,294,967,295) requested does not exist.
      at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(SQLExceptionFactory.java:45)
      at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(SQLExceptionFactory40.java:122)
      at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(SQLExceptionFactory40.java:71)
      at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Util.java:256)
      at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(TransactionResourceImpl.java:400)
      at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(TransactionResourceImpl.java:348)
      at org.apache.derby.impl.jdbc.EmbedConnection.handleException(EmbedConnection.java:2290)
      at org.apache.derby.impl.jdbc.ConnectionChild.handleException(ConnectionChild.java:82)
      at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(EmbedStatement.java:1334)
      at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeStatement(EmbedPreparedStatement.java:1686)
      at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeUpdate(EmbedPreparedStatement.java:308)
      at org.apache.derby.catalog.SystemProcedures.SYSCS_COMPRESS_TABLE(SystemProcedures.java:792)
      at org.apache.derby.exe.acd381409ax0131x72b6x8e11x0000037164a81.g0(Unknown Source)
      at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:597)
      at org.apache.derby.impl.services.reflect.ReflectMethod.invoke(ReflectMethod.java:46)
      at org.apache.derby.impl.sql.execute.CallStatementResultSet.open(CallStatementResultSet.java:75)
      at org.apache.derby.impl.sql.GenericPreparedStatement.executeStmt(GenericPreparedStatement.java:448)
      at org.apache.derby.impl.sql.GenericPreparedStatement.execute(GenericPreparedStatement.java:319)
      at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(EmbedStatement.java:1242)
      ... 3 more
      Caused by: ERROR XSAI2: The conglomerate (4,294,967,295) requested does not exist.
      at org.apache.derby.iapi.error.StandardException.newException(StandardException.java:278)
      at org.apache.derby.impl.store.access.RAMAccessManager.getFactoryFromConglomId(RAMAccessManager.java:382)
      at org.apache.derby.impl.store.access.RAMAccessManager.conglomCacheFind(RAMAccessManager.java:482)
      at org.apache.derby.impl.store.access.RAMTransaction.findExistingConglomerate(RAMTransaction.java:394)
      at org.apache.derby.impl.store.access.RAMTransaction.openConglomerate(RAMTransaction.java:1308)
      at org.apache.derby.impl.sql.execute.DDLConstantAction.lockTableForDDL(DDLConstantAction.java:252)
      at org.apache.derby.impl.sql.execute.AlterTableConstantAction.executeConstantActionBody(AlterTableConstantAction.java:364)
      at org.apache.derby.impl.sql.execute.AlterTableConstantAction.executeConstantAction(AlterTableConstantAction.java:275)
      at org.apache.derby.impl.sql.execute.MiscResultSet.open(MiscResultSet.java:61)
      at org.apache.derby.impl.sql.GenericPreparedStatement.executeStmt(GenericPreparedStatement.java:448)
      at org.apache.derby.impl.sql.GenericPreparedStatement.execute(GenericPreparedStatement.java:319)
      at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(EmbedStatement.java:1242)
      ... 15 more
      Test stopped after 9342310 ms

      The conglomerate number 4,294,967,295 looks suspicious, as it's equal to 2^32-1. Perhaps it's hitting some internal limit on the number of conglomerates? The test case used the in-memory back-end.

      1. volatile-v2.diff
        2 kB
        Knut Anders Hatlen
      2. volatile.diff
        1 kB
        Knut Anders Hatlen
      3. MultiThreadedReadAfterDDL.java
        2 kB
        Knut Anders Hatlen

        Issue Links

          Activity

          Knut Anders Hatlen added a comment -

          Saw this again when running D4275.java. Same stack trace, same conglomerate number, but this time it happened a lot sooner (after 5 minutes).

          Kristian Waagan added a comment -

          Again with the in-memory back end? If so, I can start a few runs with the on-disk back end.

          Knut Anders Hatlen added a comment -

          Yes, it was with the in-memory back end. I used the D4275.java repro that's attached to DERBY-4275 unmodified, and I ran it with trunk patched with the 1a patch from DERBY-5406, sane build.

      You may see other "conglomerate does not exist" errors being thrown because of DERBY-5406, but the interesting ones for this issue are those that have SystemProcedures.SYSCS_COMPRESS_TABLE() somewhere in the stack.

          Mike Matrigali added a comment -

          Triage for 10.9. Leaving normal urgency unless it is found that the issue leaves the database corrupted in some way. The message has the feel of a temporary timing issue.

          Knut Anders Hatlen added a comment -

          I think I found the problem that's causing this. There's a race condition in TableDescriptor.getHeapConglomerateId():

          /* If we've already cached the heap conglomerate number, then
           * simply return it.
           */
          if (heapConglomNumber != -1) { return heapConglomNumber; }

          ... (find the heap conglomerate in the list of conglomerates) ...

          heapConglomNumber = cd.getConglomerateNumber();

          return heapConglomNumber;

          I instrumented this class and found that it never set heapConglomNumber to 4,294,967,295, but the method still returned that value sometimes.

          The problem is that heapConglomNumber is a long, and the Java spec doesn't guarantee that reads/writes of long values are atomic.

          So what seems to happen is:

          • Two threads (T1 and T2) call getHeapConglomerateId() on the same TableDescriptor at about the same time, and no other calls to getHeapConglomerateId() have been made on that object before, so heapConglomNumber is initially -1.
          • T1 goes ahead finding the real conglomerate number and writing it to heapConglomNumber.
          • At the same time, T2 reads heapConglomNumber in order to check if it's already cached. However, since T1's write was not atomic, it only sees half of it. That's enough to make it see that the cached conglomerate number is no longer -1, so it concludes that it can use the cached value, but the number it sees is not the right one.

          If T2 happens to see only the most significant half of the conglomerate number written by T1, that half will probably be all zeros (because it's not very likely that more than 4 billion conglomerates have been created). The bits in the least significant half will in that case be all ones (because the initial value is -1, which is all ones in two's complement). The returned value will therefore be 0x00000000ffffffff == 4,294,967,295, as seen in the error in the bug description.

          I've also seen variants where the returned number is negative. That happens if T2 instead sees the least significant half of the correct conglomerate number, and the most significant half of the initial value -1. For example, if the conglomerate number is 344624, the error message will say: The conglomerate (-4 294 622 672) requested does not exist.
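          [Editorial note] Both observed values follow from plain bit arithmetic. The sketch below (standalone illustration, not Derby code) combines the 32-bit halves of the initial value -1 and the example conglomerate number 344624 from the comment above, reproducing both error values:

```java
// Demonstrates the arithmetic behind the torn reads described above: a
// reader that observes one 32-bit half of an unsynchronized 64-bit write
// can see a value mixing old and new contents (JLS 17.7 permits this for
// non-volatile long fields).
public class TornLongRead {
    // Combine the high 32 bits of one value with the low 32 bits of another.
    static long torn(long highSource, long lowSource) {
        return (highSource & 0xFFFFFFFF00000000L)
             | (lowSource & 0x00000000FFFFFFFFL);
    }

    public static void main(String[] args) {
        long initial = -1L;   // heapConglomNumber before initialization
        long real = 344624L;  // example real conglomerate number from above

        // New (all-zero) high half combined with old (all-one) low half:
        System.out.println(torn(real, initial));   // 4294967295
        // Old (all-one) high half combined with new low half:
        System.out.println(torn(initial, real));   // -4294622672
    }
}
```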

          Knut Anders Hatlen added a comment -

          I'm running tests with the heapConglomNumber field declared volatile to enforce atomic updates, to see if that makes these errors go away.

          Knut Anders Hatlen added a comment -

          I've had the repro running for more than three hours now with the attached patch (volatile.diff). On the same platform, I usually saw errors popping up after 5 to 15 minutes without the patch.

          The patch makes the fields ConglomerateDescriptor.conglomerateNumber and TableDescriptor.heapConglomNumber volatile to ensure that reads/writes are atomic.

          All the regression tests passed too.

          Mike Matrigali added a comment -

          Great catch on this, Knut.

          When I see a volatile I usually think there is a problem with synchronization.
          Given that this fix seems to be helping, does that indicate that there is missing synchronization in one or both of these classes, or maybe the calling class? Is it expected that 2 threads would be updating this class
          at the same time? There is no documentation in the TableDescriptor class to indicate multithreaded
          expectations.

          It would be good to understand if the bug is that TableDescriptor is not properly handling concurrent access,
          or if the bug is that other code should be preventing concurrent access to TableDescriptor.

          I do see that TableDescriptor.getStatistics() and isValid() are synchronized, so maybe that indicates all updating
          methods, and all methods that look at those updated fields, should be synchronized.

          I did a quick scan by eye of TableDescriptor and found the following fields updated in places other than
          the constructors; it was just a quick read-through, so there may be more:
          constraintDescriptorList
          triggerDescriptorList
          indexStatsUpToDate

          maybe update?:
          conglomerateDescriptorList

          Mike Matrigali added a comment -

          While reading through TableDescriptor I noticed that resetHeapConglomNumber(), which is
          used for global temporary tables, also sets this field to -1. I think in the past I have seen
          unreproducible user issues with global temp tables and errors with strange large negative conglomerate
          numbers, so it is likely related to the issue being tracked here.

          Mike Matrigali added a comment -

          Once the fix is fully understood, it would be good to update this issue with what user actions can cause this
          problem. Is it just SYSCS_COMPRESS_TABLE, or is it any operation that can result in conglomerate
          numbers changing? Maybe the test case can be expanded to show that the problem also exists in the following
          cases (basically cases where the system creates a new set of files for each table and index, copies data to
          the new files, and then updates the catalogs underneath to point at the new files):
          o truncate table
          o alter table that adds a non-nullable field
          o insert into with replace
          o other alter table?
          o any others ?

          Knut Anders Hatlen added a comment -

          It looks like only the fix in TableDescriptor is needed for this
          particular problem, by the way. The repro is running fine without the
          change in ConglomerateDescriptor.

          I'm not sure whether this is a general synchronization issue in
          TableDescriptor, or just a problem with the one specific method
          getHeapConglomerateId(). The other occurrences of fields being
          modified seem to be in setter methods or other methods whose names
          suggest they will be modifying the descriptor. I would assume that the
          callers of those methods have obtained an exclusive table lock, or use
          some other mechanism to ensure exclusive access, before they call the
          methods.

          What's so special about getHeapConglomerateId() is that the write
          happens inside a method whose name suggests it'll just do a read,
          because of lazy initialization of the field. The callers cannot
          reasonably be expected to know this and obtain an exclusive table lock
          just to fetch the id of the heap conglomerate, so extra protection is
          needed for this method.

          As to when this could happen, I think it could happen any time two
          threads concurrently ask for the conglomerate id when no other thread
          has done the same before. It could be as simple as two threads
          selecting from the same table right after the database has been booted
          (although that particular case may be hidden by the last fix that went into
          DERBY-5406, where we'd re-try if a compilation fails with conglomerate
          not found).

          Also, after any DDL operation, the TD cache in the data dictionary is
          cleared, so any operation that needs a TD after a DDL operation, even
          if the DDL operation didn't touch the table described by the TD, will
          get an instance whose heapConglomNumber field is uninitialized.

          So, theoretically, any two threads accessing the same table after a
          DDL operation, could encounter this problem.
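          [Editorial note] The lazy-initialization pattern and the volatile fix discussed in the comments above could be sketched roughly as follows. This is a minimal illustration, not Derby's actual code; lookUpConglomerateNumber() is a hypothetical stand-in for scanning the conglomerate descriptor list:

```java
// Sketch of the lazy-init pattern in TableDescriptor.getHeapConglomerateId(),
// with the field declared volatile as in volatile-v2.diff so that reads and
// writes of the 64-bit value are atomic and torn reads become impossible.
public class TableDescriptorSketch {
    // volatile guarantees atomic access to the long field (JLS 17.7)
    private volatile long heapConglomNumber = -1;

    long getHeapConglomerateId() {
        long number = heapConglomNumber; // read the volatile field once
        if (number != -1) {
            return number;               // already cached
        }
        number = lookUpConglomerateNumber();
        heapConglomNumber = number;      // atomic publish of the 64-bit value
        return number;
    }

    // Hypothetical placeholder for the real catalog lookup.
    private long lookUpConglomerateNumber() {
        return 344624L;
    }

    public static void main(String[] args) {
        System.out.println(new TableDescriptorSketch().getHeapConglomerateId()); // 344624
    }
}
```

          Reading the field into a local variable first also ensures the method checks and returns the same value, rather than reading the volatile twice.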

          Knut Anders Hatlen added a comment -

          Attaching an alternative repro (MultiThreadedReadAfterDDL.java). In my environment (Solaris 11, Java SE 7u4) it typically fails after one to two minutes (I've seen it vary from 10 seconds to 4 minutes).

          ERROR XSAI2: The conglomerate (4,294,967,295) requested does not exist.
          at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
          at org.apache.derby.impl.store.access.RAMAccessManager.getFactoryFromConglomId(Unknown Source)
          at org.apache.derby.impl.store.access.RAMAccessManager.conglomCacheFind(Unknown Source)
          at org.apache.derby.impl.store.access.RAMTransaction.findExistingConglomerate(Unknown Source)
          at org.apache.derby.impl.store.access.RAMTransaction.openConglomerate(Unknown Source)
          at org.apache.derby.iapi.db.ConsistencyChecker.checkTable(Unknown Source)
          at org.apache.derby.catalog.SystemProcedures.SYSCS_CHECK_TABLE(Unknown Source)
          at org.apache.derby.exe.ac45b300a8x0137xa7a6xf3e3x000003616e100.e0(Unknown Source)
          at org.apache.derby.impl.services.reflect.DirectCall.invoke(Unknown Source)
          at org.apache.derby.impl.sql.execute.RowResultSet.getNextRowCore(Unknown Source)
          at org.apache.derby.impl.sql.execute.BasicNoPutResultSetImpl.getNextRow(Unknown Source)
          at org.apache.derby.impl.jdbc.EmbedResultSet.movePosition(Unknown Source)
          at org.apache.derby.impl.jdbc.EmbedResultSet.next(Unknown Source)
          at MultiThreadedReadAfterDDL$1.run(MultiThreadedReadAfterDDL.java:32)

          What the repro does is:

          1) The main thread creates a table called TMP and immediately rolls it back. Because of the DDL, the TD cache in the dictionary is cleared.

          2) As soon as the main thread is done, 10 other threads call VALUES SYSCS_UTIL.SYSCS_CHECK_TABLE('APP', 'T') concurrently. Note that it checks a different table than the one touched by the DDL in the main thread.

          3) Once all threads are done executing SYSCS_CHECK_TABLE, repeat the procedure from step 1.
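          [Editorial note] The thread coordination of the three steps above could be sketched like this (coordination only, no JDBC, so it runs anywhere; the SQL calls are replaced by a counter, and in the real repro the main thread runs the CREATE TABLE/ROLLBACK while the readers run SYSCS_CHECK_TABLE):

```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicLong;

// One "DDL" step by the main thread, then READERS concurrent "check table"
// steps, repeated for ROUNDS rounds, using a CyclicBarrier to keep the
// phases in lockstep as the repro description outlines.
public class ReadAfterDDLSketch {
    static final int READERS = 10;
    static final int ROUNDS = 100;

    static long run() {
        final AtomicLong checks = new AtomicLong();
        // READERS reader threads plus the main thread
        final CyclicBarrier barrier = new CyclicBarrier(READERS + 1);
        Thread[] threads = new Thread[READERS];
        for (int i = 0; i < READERS; i++) {
            threads[i] = new Thread(() -> {
                try {
                    for (int r = 0; r < ROUNDS; r++) {
                        barrier.await();          // wait for the DDL step
                        checks.incrementAndGet(); // stand-in for SYSCS_CHECK_TABLE
                        barrier.await();          // signal this round is done
                    }
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            threads[i].start();
        }
        try {
            for (int r = 0; r < ROUNDS; r++) {
                // stand-in for CREATE TABLE TMP + ROLLBACK, which clears
                // the TD cache in the data dictionary
                barrier.await(); // release the readers
                barrier.await(); // wait until all readers finish the round
            }
            for (Thread t : threads) {
                t.join();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return checks.get();
    }

    public static void main(String[] args) {
        System.out.println(run()); // READERS * ROUNDS = 1000
    }
}
```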

          Knut Anders Hatlen added a comment -

          Attaching an updated patch (volatile-v2.diff), which only touches the TableDescriptor class. The new patch adds a javadoc comment that explains why the field is declared volatile. It also removes an unused variable from the getHeapConglomerateId() method.

          Knut Anders Hatlen added a comment -

          Committed revision 1354015.
          Leaving the issue open for back-porting.

          Knut Anders Hatlen added a comment -

          Backported to 10.9 (revision 1359058) and 10.8 (revision 1359059). Closing the issue.

          Kathey Marsden added a comment -

          Reopen to mark affects version 10.5

          Mamta A. Satoor added a comment -

          I will look at backporting this further

          Mamta A. Satoor added a comment -

          Backported to 10.7 with revision 1388565

          Mamta A. Satoor added a comment -

          Backported to 10.6 with revision 1388678

          Mamta A. Satoor added a comment -

          Backported to 10.5 with revision 1388730


            People

            • Assignee:
              Knut Anders Hatlen
              Reporter:
              Knut Anders Hatlen
            • Votes:
              0
              Watchers:
              4
