Lucene - Core

LUCENE-3348: IndexWriter applies wrong deletes during concurrent flush-all

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 4.0-ALPHA
    • Component/s: core/index
    • Labels: None
    • Lucene Fields: New, Patch Available

      Description

      Yonik uncovered this with the TestRealTimeGet test: if a flush-all is
      underway, it is possible for an incoming update to pick a DWPT that is
      stale, i.e., not yet pulled/marked for flushing even though the DW has
      already cut over to a new deletes queue. If this happens, and the
      deleted term was also updated in one of the non-stale DWPTs, then the
      wrong document is deleted and the test fails by detecting the wrong
      value.

      There's a 2nd failure mode that I haven't figured out yet, whereby 2
      docs are returned when searching by id (there should only ever be 1
      doc since the test uses updateDocument which is atomic wrt
      commit/reopen).

      Yonik verified the test passes pre-DWPT, so my guess is (but I
      have yet to verify) this test also passes on 3.x. I'll backport
      the test to 3.x to be sure.

      Attachments

      1. LUCENE-3348.patch
        56 kB
        Simon Willnauer
      2. LUCENE-3348.patch
        58 kB
        Michael McCandless
      3. fail2.txt.bz2
        129 kB
        Michael McCandless
      4. LUCENE-3348.patch
        56 kB
        Michael McCandless
      5. fail.txt.bz2
        833 kB
        Michael McCandless
      6. LUCENE-3348.patch
        54 kB
        Michael McCandless
      7. LUCENE-3348.patch
        48 kB
        Michael McCandless
      8. LUCENE-3348.patch
        31 kB
        Michael McCandless

        Activity

        Michael McCandless added a comment -

        Initial patch, tons of nocommits still but tests pass.

        I moved the Lucene-only test over to oal.index, and added VERBOSE
        prints.

        I made an initial possible fix for the first failure, which seems to
        work (I don't seem to hit that failure anymore). I'm not sure I like
        the fix... basically, after pulling the DWPT for indexing, I check if
        it's stale, and if so call a new method in DWFlushControl to move that
        DWPT into the toFlush list. I think it'd be better to somehow, up
        front in flush-all, mark all current DWPTs as stale, pull them out of
        rotation, so that the thread pool would never return such a stale
        DWPT.
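
        A minimal sketch of the check being described; the field and method
        names here are assumptions modeled on the later discussion (getAndLock,
        deleteQueue, #setFlushPending), not code copied from the patch:

          // After pulling a DWPT for this indexing thread, detect whether a
          // flush-all has already cut over to a new delete queue; if so the
          // DWPT is stale and is handed to flush control instead of being
          // indexed into.
          ThreadState perThread = perThreadPool.getAndLock(Thread.currentThread(), documentsWriter);
          if (perThread.perThread.deleteQueue != documentsWriter.deleteQueue) {
            flushControl.setFlushPending(perThread);  // move it into the toFlush list
          }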

        Still trying to understand the 2nd failure...

        Simon Willnauer added a comment -

        Mike, patch looks good. One little thing: you should check if the DWPT is already pending before calling #setFlushPending(DWPT).

        I think it'd be better to somehow, up
        front in flush-all, mark all current DWPTs as stale, pull them out of
        rotation, so that the thread pool would never return such a stale
        DWPT.

        The problem here is that you need to lock all the states that are selected for flushing. At the same time an indexing thread could lock such a DWPT and index a document, which causes the problem this issue tries to solve. If you sync the thread pool's getAndLock method this could work, but with the non-blocking approach I think this is the only way to prevent it.

        Michael McCandless added a comment -

        OK I'll make sure it's not already pending.

        Michael McCandless added a comment -

        The 2nd bug seems to happen because a commit() runs concurrently with a getReader(): the flush-all being done for the getReader() makes a newly flushed segment visible in the SegmentInfos just before commit clones the SegmentInfos, while the buffered deletes for that new segment (and for segments before it) have not yet been fully applied.

        You can see it in IW.prepareCommit – we call flush(true, true) and then startCommit w/o any sync, so in between a concurrent getReader can sneak a change into the segmentInfos, making an updateDocument appear non-atomic.

        Jason Rutherglen added a comment -

        Sorry to add my opinion to this, however I think that while non-blocking deletes are quite fancy, it seems they are open to various bugs such as this. Is there a compelling reason non-locking is used, eg, performance?

        Simon Willnauer added a comment -

        Sorry to add my opinion to this, however I think that while non-blocking deletes are quite fancy, it seems they are open to various bugs such as this. Is there a compelling reason non-locking is used, eg, performance?

        Jason, this issue is unrelated to non-blocking deletes. The bug here is in concurrent flush which is indeed the main performance factor in DWPT.

        Michael McCandless added a comment -

        Another patch, I think fixing the 2nd issue by doing a custom flush
        inside prepareCommit, which clones & incRefs the flushed SegmentInfos
        inside a sync block so that we get a consistent point-in-time commit.
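
        As a self-contained illustration (plain Java, not IndexWriter code) of
        why the clone has to happen inside the sync block: the point-in-time
        copy must be taken under the same lock that publishes newly flushed
        segments, otherwise a concurrent flush can slip a segment in between
        the flush and the copy.

          import java.util.ArrayList;
          import java.util.List;

          class PointInTimeSnapshot {
            private final Object lock = new Object();
            private final List<String> segments = new ArrayList<String>();

            // Called by concurrent flushes (e.g. driven by getReader()).
            void publishSegment(String name) {
              synchronized (lock) {
                segments.add(name);
              }
            }

            // Commit path: copying inside the same lock yields a consistent
            // point-in-time view of the published segments.
            List<String> snapshotForCommit() {
              synchronized (lock) {
                return new ArrayList<String>(segments);
              }
            }
          }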

        I also fixed a test deadlock caused by the fix for the first issue –
        have to handle the case where the DWPT is null because a close is in
        progress.

        I also evil'd up the TestStressNRT by randomizing everything, and
        fixed RIW to sometimes pull an NRT reader after doing a commit just to
        mix that up.

        The test seems to pass now; I think it's ready to commit... but I'll let it beast a while more...

        Simon Willnauer added a comment -

        mike, patch looks good. some thoughts:

        • can we factor out the while(true) {.. getAndLock(thread, DW) .. } to prevent this code duplication?

        • you throw an NPE if the DWPT is null, yet this is already handled via ThreadState#isActive(): when it returns false we call ensureOpen() to throw a consistent exception when the IW is closed, as you can see further down:
          if (!perThread.isActive()) {
            ensureOpen();
            assert false: "perThread is not active but we are still open";
          }

        I think this will also solve the deadlock issue you're describing above, no?

        thanks for taking care of this, another proof that concurrency is not easy

        Michael McCandless added a comment -

        Thanks Simon; I'll make both of those fixes.

        Unfortunately there is still at least one more thread safety issue that I'm trying to track down... beasting uncovered a good seed.

        Simon Willnauer added a comment -

        Unfortunately there is still at least one more thread safety issue that I'm trying to track down... beasting uncovered a good seed.

        argh! can you post it here?

        simon

        Michael McCandless added a comment -

        Current patch, but still at least another concurrency issue.

        Michael McCandless added a comment -

        Here's what I run with the while1 tester in luceneutil: TestStressNRT -iters 3 -verbose -seed -6208047570437556381:-3138230871915238634

        I think what's special about the seed is that maxBufferedDocs is 3, so we are doing tons of segment flushing. I dumbed back the test somewhat (turned off merging entirely, only 1 reader thread, up to 5 writer threads), and it still fails.

        Simon Willnauer added a comment -

        Mike, I cannot reproduce this failure... what exactly is failing there? Maybe you can put the output in a text file and attach it?

        Regarding the latest patch, I think we can call DWFlushControl#addFlushableState() from DWFlushControl#markForFullFlush() and use a global list to collect the DWPTs for the full flush.

        I think we should move the getAndLock call into DWFlushControl, something like DWFlushControl#obtainAndLock(); this would allow us to make the check and the DWFlushControl#addFlushableState() method private to DWFC. Further, we can also simplify the deleteQueue check a little: since we already obtained a ThreadState, we don't need to unlock the state again after calling addFlushableState(). Something like this:

        ThreadState obtainAndLock() {
          final ThreadState perThread = perThreadPool.getAndLock(Thread.currentThread(), documentsWriter);
          if (perThread.isActive()
              && perThread.perThread.deleteQueue != documentsWriter.deleteQueue) {
            // There is a flush-all in process and this DWPT is
            // now stale -- enroll it for flush and try for
            // another DWPT:
            addFlushableState(perThread);
          }
          return perThread;
        }

        We may be spending too much time in full flush, since we lock all ThreadStates at least once even though some indexing threads might have already helped out by swapping out DWPT instances. I think we could collect the already swapped-out ThreadStates during a full flush and only check the ones that have not been processed?

        Michael McCandless added a comment -

        Full output from a failure.

        Michael McCandless added a comment -

        OK I attached the output of a failure – it's 400K lines. Search for the AssertionError, where neither a doc nor a tombstone could be found for id:26.

        Michael McCandless added a comment -

        Simon found one case that could result in a delete becoming visible before a previous updateDocument. I made that fix (to DW.applyAllDeletes) but unfortunately there's still a failure (see fail2.txt.bz2).

        Simon Willnauer added a comment -

        I think I now know what is causing the failure here. In IW#prepareCommit(Map) we release the full flush (docWriter.finishFullFlush(success)) before we apply the deletes. This means that another thread can start flushing and freeze & push its global deletes into the BufferedDeleteStream before we call IW#maybeApplyDeletes(). If a flush is fast enough (small segment) and something else causes the committing thread to wait on the IW in order to apply the deletes, a del-packet could sneak in that doesn't belong to the commit. In IW#getReader this is already handled correctly.
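
        In other words, the deletes need to be applied before the full flush is
        released, mirroring what IW#getReader already does. A rough sketch with
        illustrative names (flushAllThreads is assumed here; only finishFullFlush
        and maybeApplyDeletes are taken from the discussion above):

          boolean success = false;
          try {
            docWriter.flushAllThreads();   // the full flush is still held here
            maybeApplyDeletes(true);       // apply the frozen deletes while the full flush
                                           // is held, so no del-packet from another
                                           // flushing thread can sneak into this commit
            success = true;
          } finally {
            docWriter.finishFullFlush(success);  // only now release the full flush
          }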

        Michael McCandless added a comment -

        I think you are right! I will fix prepareCommit to match getReader and re-beast.

        Michael McCandless added a comment -

        Patch, incorporating Simon's last suggestion. I think this fixes the concurrency bugs – beasting for 2703 iterations so far and no failure!

        Not quite committable – lots of added SOPs. I'll be out next week so won't get to this until I'm back so feel free to clean it up and commit!

        Simon Willnauer added a comment -

        Patch, incorporating Simon's last suggestion. I think this fixes the concurrency bugs – beasting for 2703 iterations so far and no failure!

        awesome!

        Not quite committable – lots of added SOPs. I'll be out next week so won't get to this until I'm back so feel free to clean it up and commit!

        Mike, I will assign this to me and get it committable next week.

        Thanks, have a good time

        Michael McCandless added a comment -

        Thanks Simon! Also I didn't implement your suggestion above (putting the new code into DWFC.obtainAndLock), but I think we should!

        Simon Willnauer added a comment -

        Here is a cleaned-up patch with all the fixes. I beasted the seed 3k times with no failure and ran 5k random iterations without a failure. I think we are good to go here.

        Yonik Seeley added a comment -

        Tricky stuff... great job of tracking all these concurrency issues down!
        I tweaked the test (more variability in number of threads, etc) and it's been running for 2 hours w/ no failures.

        Simon Willnauer added a comment -

        I am planning to commit this tomorrow if nobody objects

        Mark Miller added a comment -

        +1 - thanks for ferreting these concurrency issues out!

        Simon Willnauer added a comment -

        Committed to trunk in revision 1155278.
        I backported the test to 3.x and it failed. Maybe something is wrong with the test though, I will dig! Here is the failure:

            [junit] ------------- Standard Output ---------------
            [junit] FAIL: hits id:34 val=-38
            [junit]   docID=43 id:-34 foundVal=38
            [junit] READER3: FAILED: unexpected exception
            [junit] java.lang.AssertionError: id=34 reader=ReadOnlyDirectoryReader(segments_q _2l(3.4):cv62/13 _2p(3.4):Cv6 _2o(3.4):cv47) totalHits=2
            [junit] 	at org.junit.Assert.fail(Assert.java:91)
            [junit] 	at org.apache.lucene.index.TestStressNRT$2.run(TestStressNRT.java:345)
            [junit] FAIL: hits id:25 val=39
            [junit]   docID=24 id:25 foundVal=39
            [junit]   docID=85 id:25 foundVal=43
            [junit] READER1: FAILED: unexpected exception
            [junit] java.lang.AssertionError: id=25 reader=ReadOnlyDirectoryReader(segments_q _2l(3.4):cv62/13 _2p(3.4):Cv6 _2o(3.4):cv47) totalHits=2
            [junit] 	at org.junit.Assert.fail(Assert.java:91)
            [junit] 	at org.apache.lucene.index.TestStressNRT$2.run(TestStressNRT.java:345)
            [junit] ------------- ---------------- ---------------
            [junit] ------------- Standard Error -----------------
            [junit] NOTE: reproduce with: ant test -Dtestcase=TestStressNRT -Dtestmethod=test -Dtests.seed=-78c35b20c01ed2f8:-292d76adf99900e2:3f7c8696906a10c7
            [junit] NOTE: reproduce with: ant test -Dtestcase=TestStressNRT -Dtestmethod=test -Dtests.seed=-78c35b20c01ed2f8:-292d76adf99900e2:3f7c8696906a10c7
            [junit] The following exceptions were thrown by threads:
            [junit] *** Thread: READER3 ***
            [junit] java.lang.RuntimeException: java.lang.AssertionError: id=34 reader=ReadOnlyDirectoryReader(segments_q _2l(3.4):cv62/13 _2p(3.4):Cv6 _2o(3.4):cv47) totalHits=2
            [junit] 	at org.apache.lucene.index.TestStressNRT$2.run(TestStressNRT.java:360)
            [junit] Caused by: java.lang.AssertionError: id=34 reader=ReadOnlyDirectoryReader(segments_q _2l(3.4):cv62/13 _2p(3.4):Cv6 _2o(3.4):cv47) totalHits=2
            [junit] 	at org.junit.Assert.fail(Assert.java:91)
            [junit] 	at org.apache.lucene.index.TestStressNRT$2.run(TestStressNRT.java:345)
            [junit] *** Thread: READER1 ***
            [junit] java.lang.RuntimeException: java.lang.AssertionError: id=25 reader=ReadOnlyDirectoryReader(segments_q _2l(3.4):cv62/13 _2p(3.4):Cv6 _2o(3.4):cv47) totalHits=2
            [junit] 	at org.apache.lucene.index.TestStressNRT$2.run(TestStressNRT.java:360)
            [junit] Caused by: java.lang.AssertionError: id=25 reader=ReadOnlyDirectoryReader(segments_q _2l(3.4):cv62/13 _2p(3.4):Cv6 _2o(3.4):cv47) totalHits=2
            [junit] 	at org.junit.Assert.fail(Assert.java:91)
            [junit] 	at org.apache.lucene.index.TestStressNRT$2.run(TestStressNRT.java:345)
            [junit] NOTE: test params are: locale=fr_BE, timezone=EET
            [junit] NOTE: all tests run in this JVM:
            [junit] [TestCharFilter, TestClassicAnalyzer, TestKeywordAnalyzer, TestStandardAnalyzer, TestBinaryDocument, TestAtomicUpdate, TestConcurrentMergeScheduler, TestDeletionPolicy, TestDirectoryReader, TestDoc, TestLazyProxSkipping, TestMultiLevelSkipList, TestPerSegmentDeletes, TestSameTokenSamePosition, TestStressNRT]
            [junit] NOTE: Linux 2.6.35-30-generic amd64/Sun Microsystems Inc. 1.6.0_26 (64-bit)/cpus=12,threads=1,free=286656336,total=352714752
            [junit] ------------- ---------------- ---------------
        
        
        Simon Willnauer added a comment -

        FYI - I opened LUCENE-3368 to track the failures in 3.x and backported the test together with the fix.


          People

          • Assignee: Simon Willnauer
          • Reporter: Michael McCandless
          • Votes: 0
          • Watchers: 2
