Cassandra / CASSANDRA-2653

index scan errors out when zero columns are requested

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Fix Version/s: 0.7.7, 0.8.2
    • Component/s: Core
    • Labels:
      None

      Description

      As reported by Tyler Hobbs as an addendum to CASSANDRA-2401,

      ERROR 16:13:38,864 Fatal exception in thread Thread[ReadStage:16,5,main]
      java.lang.AssertionError: No data found for SliceQueryFilter(start=java.nio.HeapByteBuffer[pos=10 lim=10 cap=30], finish=java.nio.HeapByteBuffer[pos=17 lim=17 cap=30], reversed=false, count=0] in DecoratedKey(81509516161424251288255223397843705139, 6b657931):QueryPath(columnFamilyName='cf', superColumnName='null', columnName='null') (original filter SliceQueryFilter(start=java.nio.HeapByteBuffer[pos=10 lim=10 cap=30], finish=java.nio.HeapByteBuffer[pos=17 lim=17 cap=30], reversed=false, count=0]) from expression 'cf.626972746864617465 EQ 1'
      	at org.apache.cassandra.db.ColumnFamilyStore.scan(ColumnFamilyStore.java:1517)
      	at org.apache.cassandra.service.IndexScanVerbHandler.doVerb(IndexScanVerbHandler.java:42)
      	at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:72)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
      	at java.lang.Thread.run(Thread.java:662)
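      For context, a sketch of the kind of client request that can trigger this, assuming the 0.7/0.8 Thrift API (get_indexed_slices, as seen in the stack trace above). The column family 'cf' and the indexed column 'birthdate' come from the error message; every other name and value here is illustrative only.

          import java.nio.ByteBuffer;
          import java.util.Arrays;

          import org.apache.cassandra.thrift.Cassandra;
          import org.apache.cassandra.thrift.ColumnParent;
          import org.apache.cassandra.thrift.ConsistencyLevel;
          import org.apache.cassandra.thrift.IndexClause;
          import org.apache.cassandra.thrift.IndexExpression;
          import org.apache.cassandra.thrift.IndexOperator;
          import org.apache.cassandra.thrift.SlicePredicate;
          import org.apache.cassandra.thrift.SliceRange;

          public class ZeroColumnIndexScan
          {
              // 'client' is assumed to be an open Cassandra.Client with the keyspace already set.
              public static void reproduce(Cassandra.Client client) throws Exception
              {
                  // EQ expression on the indexed 'birthdate' column, as in the logged expression
                  // 'cf.626972746864617465 EQ 1' (the value encoding here is illustrative).
                  IndexExpression expr = new IndexExpression(
                          ByteBuffer.wrap("birthdate".getBytes("UTF-8")),
                          IndexOperator.EQ,
                          ByteBuffer.wrap(new byte[]{ 0, 0, 0, 0, 0, 0, 0, 1 }));
                  IndexClause clause = new IndexClause(
                          Arrays.asList(expr),
                          ByteBuffer.wrap(new byte[0]), // start_key: from the beginning of the range
                          100);                         // max rows

                  // A slice predicate that requests zero columns: the data query for each
                  // matching row then returns nothing, which trips the assertion in
                  // ColumnFamilyStore.scan.
                  SlicePredicate predicate = new SlicePredicate();
                  predicate.setSlice_range(new SliceRange(
                          ByteBuffer.wrap(new byte[0]), // start of slice
                          ByteBuffer.wrap(new byte[0]), // end of slice
                          false,                        // not reversed
                          0));                          // count = 0

                  client.get_indexed_slices(new ColumnParent("cf"), clause, predicate,
                                            ConsistencyLevel.ONE);
              }
          }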
      
      Attachments

      1. 0001-Fix-scan-issue.patch
        4 kB
        Sylvain Lebresne
      2. 2653_v3.patch
        8 kB
        Sylvain Lebresne
      3. 2653_v2.patch
        8 kB
        Sylvain Lebresne
      4. 0001-Handle-null-returns-in-data-index-query-v0.7.patch
        2 kB
        Sylvain Lebresne
      5. 0001-Handle-data-get-returning-null-in-secondary-indexes.patch
        2 kB
        Sylvain Lebresne
      6. 0001-Reset-SSTII-in-EchoedRow-constructor.patch
        3 kB
        Sylvain Lebresne
      7. ASF.LICENSE.NOT.GRANTED--v1-0001-CASSANDRA-2653-reproduce-regression.txt
        8 kB
        T Jake Luciani

        Activity

        Jonathan Ellis created issue -
        Tyler Hobbs made changes -
        Affects Version/s: 0.8.0 beta 2 [ 12316379 ]
        T Jake Luciani made changes -
        T Jake Luciani added a comment -

        Attached testcase reproduces the error every time

        Run like:

        ant long-test -Dtest.name=IndexCorruptionTest

        T Jake Luciani added a comment -
        long-test:
             [echo] running long tests
            [junit] WARNING: multiple versions of ant detected in path for junit 
            [junit]          jar:file:/usr/share/ant/lib/ant.jar!/org/apache/tools/ant/Project.class
            [junit]      and jar:file:/Users/jake/workspace/cassandra-git/build/lib/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
            [junit] Testsuite: org.apache.cassandra.db.IndexCorruptionTest
            [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 247.427 sec
            [junit] 
            [junit] ------------- Standard Error -----------------
            [junit] ERROR 16:31:03,330 Fatal exception in thread Thread[ReadStage:5,5,main]
            [junit] java.lang.AssertionError: No data found for NamesQueryFilter(columns=java.nio.HeapByteBuffer[pos=12 lim=16 cap=17]) in DecoratedKey(Token(bytes[004600460048004d00540049005900590048005400460059004a0048004b0055004e00550048004b00530055005400480055004b004f004b004a00460058004600000001000100010001000100010001000100e3000100010001000100e3000100010001000100e3000100010001000100e30001000100010001000100010001000100010001000100010000000100010001000100010001000100010003000100010001000100030001000100010001000300010001000100010003000100010001000100010001000100010001000100010001]), 30303237623366662d326230662d343235632d386332352d616362326335393534306530):QueryPath(columnFamilyName='inode', superColumnName='null', columnName='null') (original filter NamesQueryFilter(columns=java.nio.HeapByteBuffer[pos=12 lim=16 cap=17])) from expression 'inode.73656e74696e656c EQ 78'
            [junit] 	at org.apache.cassandra.db.ColumnFamilyStore.scan(ColumnFamilyStore.java:1517)
            [junit] 	at org.apache.cassandra.service.IndexScanVerbHandler.doVerb(IndexScanVerbHandler.java:42)
            [junit] 	at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:72)
            [junit] 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            [junit] 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            [junit] 	at java.lang.Thread.run(Thread.java:680)
            [junit] ------------- ---------------- ---------------
            [junit] Testcase: runTest(org.apache.cassandra.db.IndexCorruptionTest):	Caused an ERROR
            [junit] TimedOutException()
            [junit] java.io.IOException: TimedOutException()
            [junit] 	at org.apache.cassandra.db.IndexCorruptionTest.listDeepSubPaths(IndexCorruptionTest.java:107)
            [junit] 	at org.apache.cassandra.db.IndexCorruptionTest.runTest(IndexCorruptionTest.java:64)
            [junit] Caused by: TimedOutException()
            [junit] 	at org.apache.cassandra.thrift.Cassandra$get_indexed_slices_result.read(Cassandra.java:13801)
            [junit] 	at org.apache.cassandra.thrift.Cassandra$Client.recv_get_indexed_slices(Cassandra.java:810)
            [junit] 	at org.apache.cassandra.thrift.Cassandra$Client.get_indexed_slices(Cassandra.java:782)
            [junit] 	at org.apache.cassandra.db.IndexCorruptionTest.listDeepSubPaths(IndexCorruptionTest.java:90)
            [junit] 
            [junit] 
            [junit] Test org.apache.cassandra.db.IndexCorruptionTest FAILED
        
        BUILD FAILED
        /Users/jake/workspace/cassandra-git/build.xml:1082: The following error occurred while executing this line:
        /Users/jake/workspace/cassandra-git/build.xml:1037: Some long test(s) failed.
        
        Total time: 4 minutes 14 seconds
        
        
        T Jake Luciani added a comment (edited) -

        Definitely related to compaction...

        The memtable flushes every minute and the test fails after > 4 minutes.

        If you change the min compaction threshold to 2, it fails after > 2 minutes.

        Sylvain Lebresne made changes -
        Assignee: Jonathan Ellis [ jbellis ] → Sylvain Lebresne [ slebresne ]
        Sylvain Lebresne added a comment -

        This is indeed compaction related (but not related to secondary indexing at
        all). The problem is that compaction may lose some rows.

        Because of the way the ReducingIterator works, when we create a new
        {Pre|Lazy|Echoed}CompactedRow, we have already decoded the next row key and
        the file pointer is after that next row key. Both PreCompactedRow and
        LazilyCompactedRow handle this correctly by "resetting" their
        SSTableIdentityIterator before reading (SSTII.getColumnFamilyWithColumns()
        does it for PreCompactedRow, and LazilyCompactedRow calls SSTII.reset()
        directly). But EchoedRow doesn't handle this correctly. Hence when
        EchoedRow.isEmpty() is called, it calls SSTII.hasNext(), which compares
        the current file pointer to the finishedAt value of the iterator. Since the
        pointer is on the next row, this test will always fail and the row will be skipped.

        Attaching a patch against 0.8 with a (smaller) unit test.

        Note that luckily this doesn't affect 0.7, because 0.7 only uses EchoedRow for
        cleanup compactions, and cleanup compactions do not use ReducingIterator (and
        thus, the underlying SSTII won't have changed when the EchoedRow is built).
        I would still be in favor of committing the patch there too, just to make sure
        we don't hit this later.
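        As a minimal self-contained sketch of the shape of the fix described above (this is not the attached patch; SSTableIdentityIterator is modeled here by just the two methods that matter):

            // Sketch only: reset the row iterator before the row is echoed, so that
            // hasNext()/isEmpty() sees the row's own position rather than the file
            // pointer that has already advanced past the next row key.
            interface RowIterator
            {
                void reset();      // rewind the file pointer to this row's own start
                boolean hasNext(); // compares the current file pointer to finishedAt
            }

            class EchoedRow
            {
                private final RowIterator row;

                EchoedRow(RowIterator row)
                {
                    this.row = row;
                    // The ReducingIterator has already read the next row key, so without
                    // this reset isEmpty() below would wrongly report the row as empty
                    // and compaction would silently drop it.
                    row.reset();
                }

                boolean isEmpty()
                {
                    return !row.hasNext();
                }
            }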

        Sylvain Lebresne made changes -
        Sylvain Lebresne made changes -
        Status: Open [ 1 ] → Patch Available [ 10002 ]
        Fix Version/s: 0.8.0 [ 12316403 ]
        Fix Version/s: 0.7.7 [ 12316431 ]
        Jonathan Ellis added a comment -

        +1 for 0.7 / 0.8

        Hudson added a comment -

        Integrated in Cassandra-0.7 #500 (See https://builds.apache.org/hudson/job/Cassandra-0.7/500/)
        Reset SSTII in EchoedRow iterator (see CASSANDRA-2653)

        slebresne : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1129151
        Files :

        • /cassandra/branches/cassandra-0.7/test/unit/org/apache/cassandra/db/CompactionsTest.java
        • /cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/CompactionManager.java
        Jonathan Ellis made changes -
        Fix Version/s: 0.8.1 [ 12316368 ]
        Fix Version/s: 0.8.0 [ 12316403 ]
        Sylvain Lebresne made changes -
        Status: Patch Available [ 10002 ] → Resolved [ 5 ]
        Resolution: Fixed [ 1 ]
        Jonathan Ellis added a comment -

        Is this actually fixed for the zero-columns-requested original problem?

        Sylvain Lebresne added a comment -

        This really primarily fixes the error from Jake's test cases. I'll have to admit that's the only thing I looked at. I did not realize the original problem was not necessarily related, so it is very possible (even likely) that this does not fix the zero-columns-requested problem.

        Jonathan Ellis added a comment -

        reopening to fix "the tyler issue."

        Jonathan Ellis made changes -
        Resolution: Fixed [ 1 ]
        Status: Resolved [ 5 ] → Reopened [ 4 ]
        Sylvain Lebresne added a comment -

        The "Tyler" problem is actually not limited to 0 column query. The problem is that when we query the rows for data, we use whatever filter the user provided (there's a number of optimiziation in the case we have more than 1 clause but that doesn't really matter for our problem). The thing is, there is no guarantee that whatever that filter is, it will include the column of the primary clause (having a column count of 0 is just one case where we're sure it won't include it). Thus the assertion that something will be returned is bogus.

        Attaching a patch (against 0.8) to fix. Note that this mean we have no way to assert the sanity of the index during a read, unless we force the querying of the primary index clause, but this will have a performance impact (and a non negligible one in cases this would force us to do a new query just for that).
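        To make the reasoning concrete, a small self-contained toy (plain Java collections, not Cassandra internals) showing how a perfectly healthy row can legitimately come back empty under the caller's own filter:

            import java.util.Map;
            import java.util.NavigableMap;
            import java.util.TreeMap;

            class DataFilterToy
            {
                // Return at most 'count' columns of 'row' whose names fall in [start, finish].
                static NavigableMap<String, byte[]> slice(NavigableMap<String, byte[]> row,
                                                          String start, String finish, int count)
                {
                    NavigableMap<String, byte[]> result = new TreeMap<>();
                    for (Map.Entry<String, byte[]> e : row.subMap(start, true, finish, true).entrySet())
                    {
                        if (result.size() >= count)
                            break;
                        result.put(e.getKey(), e.getValue());
                    }
                    return result;
                }

                public static void main(String[] args)
                {
                    NavigableMap<String, byte[]> row = new TreeMap<>();
                    row.put("birthdate", new byte[]{ 1 }); // the indexed column exists and matches

                    // A count=0 request (or any range that misses 'birthdate') returns nothing
                    // even though the index entry is perfectly valid, so asserting a non-empty
                    // result, as the old code did, is wrong.
                    NavigableMap<String, byte[]> returned = slice(row, "a", "z", 0);
                    System.out.println("columns returned: " + returned.size()); // 0, and that's fine
                }
            }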

        Sylvain Lebresne made changes -
        Sylvain Lebresne added a comment -

        This actually also affects 0.7, so attaching a patch for 0.7 as well.

        Sylvain Lebresne made changes -
        Sylvain Lebresne made changes -
        Fix Version/s: 0.7.7 [ 12316431 ]
        Affects Version/s: 0.7.6 [ 12316377 ]
        Jonathan Ellis added a comment -

        Is there a way we can keep a sanity check here? CASSANDRA-2401 was not so long ago.

        Sylvain Lebresne added a comment -

        As I said earlier, I think the only way to keep one would be to force the querying of the primary index clause column name. In some cases, when we already do a NameQuery, either as part of the first data query or because we need a query for the extraFilter, this won't be a big deal. If it's a slice query and the primary index clause name is part of the result, we're good too. But otherwise, we'll have to do a specific query just to validate the assert. Maybe the cases where we'll have to do an extra query are rare enough that we think it's worth it. But then there is the other problem.

        The other problem is that this assertion is not thread safe, because the query to the index and the data is not atomic.

        Jonathan Ellis added a comment -

        I think the only way to keep one would be to force the querying of the primary index clause column name... but this will have a performance impact

        I think we should take the impact. (The common query that we want to be fast is name-based and this won't affect that.)

        Sylvain Lebresne added a comment -

        I actually agree with taking the impact, especially given that there are actually very few cases where it will make an actual difference anyway.

        Attaching a patch (2653_v2, based on 0.7) that implements the idea and adds back the sanity check.

        Sylvain Lebresne made changes -
        Attachment: 2653_v2.patch [ 12483978 ]
        Jonathan Ellis added a comment -

        doesn't this assert still have the "the query to the index and the data is not atomic" problem?

        Sylvain Lebresne made changes -
        Attachment: 2653_v3.patch [ 12484407 ]
        Sylvain Lebresne added a comment -

        doesn't this assert still have the "the query to the index and the data is not atomic" problem?

        No, you're right; I focused on adding back the assert, forgetting it wasn't safe in the first place. Attaching v3, based on v2, but instead of asserting that the returned row contains the primary clause column, it skips the row if it doesn't contain it. That is, instead of asserting the non-corruption of the index, it ignores any possible corruption. But more importantly (one could hope we don't have a bug that corrupts indexes), it will avoid returning incoherent results to the user in the event of a race between reads and writes.

        Trying to prevent the race from happening would require synchronization with writes, which would be much harder and less efficient. And we probably need to have a fix for this out sooner rather than later (both the error when zero columns are requested and the possibility of wrongly throwing assertion errors).

        In the longer term, I think we should explore the possibility of no longer caring whether our secondary indexes are coherent at all times, and instead repairing them at read time, as this may allow us to get rid of the read-before-write. But that is a longer-term goal at best and work for another ticket.
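        A compact sketch of what "skip instead of assert" amounts to, with hypothetical names (the committed change lives in ColumnFamilyStore.scan):

            import java.nio.ByteBuffer;
            import java.util.Map;

            class SkipInsteadOfAssert
            {
                // The data query is forced to include the primary-clause column; a row whose
                // returned data does not contain a matching value is silently dropped rather
                // than tripping an assertion, since it may just be a stale index entry or a
                // write that raced this read.
                static boolean satisfiesPrimaryClause(Map<ByteBuffer, ByteBuffer> returnedColumns,
                                                      ByteBuffer primaryClauseName,
                                                      ByteBuffer primaryClauseValue)
                {
                    ByteBuffer value = returnedColumns.get(primaryClauseName);
                    return value != null && value.equals(primaryClauseValue);
                }
            }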

        Jonathan Ellis added a comment -

        +1

        Jonathan Ellis made changes -
        Fix Version/s: 0.8.2 [ 12316645 ]
        Fix Version/s: 0.8.1 [ 12316368 ]
        Hudson added a comment -

        Integrated in Cassandra-0.7 #517 (See https://builds.apache.org/job/Cassandra-0.7/517/)
        Fix scan wrongly throwing assertion errors
        patch by slebresne; reviewed by jbellis for CASSANDRA-2653

        slebresne : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1141129
        Files :

        • /cassandra/branches/cassandra-0.7/CHANGES.txt
        • /cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
        Sylvain Lebresne added a comment -

        Actually, after having committed it, I realize there are a few issues with the previous patch. Two, mostly:

        1. If the extraFilter query finds nothing (which it will only do in the case of a race between writes and reads), getColumnFamily() will return null and the data.addAll() will NPE.
        2. For 0.8 and for counters, we must make really sure that this extra query won't add columns that were returned by the first query (which can happen in the current code), otherwise we'll overcount. I think this is actually a bug that predates the fix for this ticket.

        Anyway, attaching 0001-Fix-scan-issue, which fixes both of those issues. It also adds a slight optimization that avoids doing extra work if we know an extra query won't help.
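        A minimal sketch of the two follow-up fixes, with hypothetical names (the actual changes are in the attached 0001-Fix-scan-issue.patch):

            import java.util.Map;

            class ExtraQueryFixSketch
            {
                static <C, V> void mergeExtraColumns(Map<C, V> data, Map<C, V> extra)
                {
                    // Fix 1: the extra query can find nothing when a write races the read,
                    // so guard against a null result before merging (the old code NPE'd).
                    if (extra == null)
                        return;

                    // Fix 2: only add columns that the first query did not already return;
                    // re-adding them would double-count counter columns.
                    for (Map.Entry<C, V> e : extra.entrySet())
                        data.putIfAbsent(e.getKey(), e.getValue());
                }
            }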

        Sylvain Lebresne made changes -
        Attachment: 0001-Fix-scan-issue.patch [ 12484669 ]
        Sylvain Lebresne made changes -
        Status: Reopened [ 4 ] → Patch Available [ 10002 ]
        Jonathan Ellis added a comment -

        +1

        Sylvain Lebresne added a comment -

        Committed, thanks

        Sylvain Lebresne made changes -
        Status: Patch Available [ 10002 ] → Resolved [ 5 ]
        Resolution: Fixed [ 1 ]
        Gavin made changes -
        Workflow: no-reopen-closed, patch-avail [ 12613509 ] → patch-available, re-open possible [ 12752832 ]
        Gavin made changes -
        Workflow: patch-available, re-open possible [ 12752832 ] → reopen-resolved, no closed status, patch-avail, testing [ 12758482 ]

          People

          • Assignee: Sylvain Lebresne
          • Reporter: Jonathan Ellis
          • Votes: 0
          • Watchers: 4
