Lucene - Core
LUCENE-2618

Intermittent failure in TestThreadedOptimize

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.9.4, 3.0.3, 3.1, 4.0-ALPHA
    • Component/s: core/index
    • Labels:
      None
    • Lucene Fields:
      New

      Description

      Failure looks like this:

          [junit] Testsuite: org.apache.lucene.index.TestThreadedOptimize
          [junit] Testcase: testThreadedOptimize(org.apache.lucene.index.TestThreadedOptimize):	FAILED
          [junit] null
          [junit] junit.framework.AssertionFailedError: null
          [junit] 	at org.apache.lucene.index.TestThreadedOptimize.runTest(TestThreadedOptimize.java:125)
          [junit] 	at org.apache.lucene.index.TestThreadedOptimize.testThreadedOptimize(TestThreadedOptimize.java:149)
          [junit] 	at org.apache.lucene.util.LuceneTestCase.runBare(LuceneTestCase.java:253)
      

      I just committed some verbosity so next time it strikes we'll have more details.

      1. LUCENE-2618.patch
        66 kB
        Michael McCandless
      2. LUCENE-2618.patch
        2 kB
        Michael McCandless
      3. LUCENE-2618.patch
        0.8 kB
        Michael McCandless

        Activity

        Michael McCandless added a comment -

        Jason, I did commit to trunk. (I edited the issue summary to remove "3.x backward" since the issue happened everywhere.)

        Jason Rutherglen added a comment -

        Are we going to fix this in trunk as well?

        Michael McCandless added a comment -

        Patch.

        I think I found this – it's a thread-safety issue that happens when a "normal" merge is kicking off at the same time that another thread calls optimize.

        In this case it's possible that the merge fails to mark itself as an optimizing merge, which means any merges that cascade from it will also fail to be marked as part of the optimize.

        I also modified MockDirWrapper to randomly call Thread.yield to see if we can tease out any more thread bugs.
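
        A minimal sketch of the yield-injection idea (illustrative only, not the actual MockDirWrapper change; the class name and yield rate are hypothetical):

            import java.util.Random;

            // Call maybeYield() at the top of each delegated Directory method so
            // thread interleavings vary from run to run and races surface sooner.
            class RandomYielder {
              private final Random random;

              RandomYielder(Random random) {
                this.random = random;
              }

              void maybeYield() {
                // Yield roughly one call in four.
                if (random.nextInt(4) == 0) {
                  Thread.yield();
                }
              }
            }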

        Michael McCandless added a comment -

        Ugh – last night's 3.x build just failed again! So this was not the [only] cause. Hmm. I'll leave this reopened....

        Shai Erera added a comment -

        Thanks Mike.

        Michael McCandless added a comment -

        OK I opened LUCENE-2720.

        Michael McCandless added a comment -

        We should fix the code to throw the exception immediately. Is there a way to check a Directory if it's old or not?

        I agree – IW.open should fail immediately if any of the segments are too old.

        Unfortunately, I don't see a simple way to do this. We can't just look at the version of the segments_N file, for example, because one segment could be from 2.9, and [say] 3.1 had last opened the index and written the 3.x file format for segments_N. See, IW does not go and open all SegmentReaders on open. It's only on merge, applying deletes, or opening an NRT reader, that we go and open segments for reading.

        I think to do this correctly we should modify segments_N format to record the oldest segment in the index? Then IW can check this easily on open.

        I don't mind if you continue w/ the fix to the test as you did, but IMO it just hides the real problem. I.e., allowing all merges caused by optimize() to finish is a correct fix.

        I agree.

        There is a pre-existing TODO in the test stating that we should fix IW to throw this exc on open. I'll also add a TODO to IW's ctor and go open an issue...
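
        A minimal sketch of that proposal (hypothetical names, not the committed API; the real check would throw IndexFormatTooOldException rather than a plain IOException):

            import java.io.IOException;

            final class OldestSegmentCheck {
              /** Suppose segments_N recorded the format version of its oldest
               *  segment; IW's ctor could then fail fast on open instead of
               *  failing later, during a merge. */
              static void check(int oldestSegmentFormat, int minSupportedFormat)
                  throws IOException {
                if (oldestSegmentFormat < minSupportedFormat) {
                  throw new IOException("oldest segment has format "
                      + oldestSegmentFormat + "; this release requires at least "
                      + minSupportedFormat);
                }
              }
            }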

        Shai Erera added a comment -

        OK Mike. I understood the sequence of operations that led to this exception before. What didn't add up is why it is thrown during optimize, and not, say, up front when IW is opened, or when the Directory was added through addIndexes.

        We should fix the code to throw the exception immediately. Is there a way to check a Directory if it's old or not? If not, such an exception could really throw you off your chair when you hit it at a point in time not remotely related to when it was added to the index.

        I don't mind if you continue w/ the fix to the test as you did, but IMO it just hides the real problem. I.e., allowing all merges caused by optimize() to finish is a correct fix. But catching that exception upon IW.close() is a bad one IMO - people who read the code learn how to use Lucene, and catching that exception on close() makes absolutely no sense, at least to me. Could you please add a TODO there to get rid of that code when we fix IW to detect old indexes up front? That way, if someone reads the code, he'll at least understand that this is a temporary solution.

        Michael McCandless added a comment -

        I think there's a separate issue open (Uwe?) to have IW immediately throw this exc on open, instead of during optimize/close.

        Michael McCandless added a comment -

        If my app knowingly opened a too-old index, would it always get this exception if it calls optimize followed by close? Or is it a special scenario hit by the test?

        Not always. It's only if the MP registered more than one merge for the optimize, and you're using SMS.

        But, really if your app has risk of opening a too-old index, it should be prepared for this exc...

        namely, what's the connection between optimize + close and an old index?

        MP enrolled 2 merges for the optimize... the first one hits the exc... then the test calls close... and close lets MS run... and MS is SMS... and it runs the 2nd merge, which hits the exc.

        Shai Erera added a comment -

        I see. It's just that you described this exception as being thrown because close is called while optimize was running over an old index - but I don't understand why it has to be thrown in this case - namely, what's the connection between optimize + close and an old index? If my app knowingly opened a too-old index, would it always get this exception if it calls optimize followed by close? Or is it a special scenario hit by the test?

        Michael McCandless added a comment -

        Does this mean I'll need to catch that exception every time I close an IW, or at least prepare to catch it?

        Well, IndexFormatTooOldExc subclasses IOE... but, yes, if there's a risk you'll open a too-old index, you should try to handle this.

        IW.close does a lot... flushes the last segment, lets MS run any pending merges, does the commit, deletes now-unneeded files, etc. So there are plenty of chances for interesting excs.

        Is it only relevant to the test?

        Well, this test opens an IW on a too-old index... so if your app may do that....

        Can IW swallow those exceptions internally, and relieve the application from all this?

        Whoa no way!
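
        A sketch of the defensive pattern described above, for an app that might open a too-old index (the helper class is illustrative; IndexFormatTooOldException is the real exception under discussion):

            import java.io.IOException;

            import org.apache.lucene.index.IndexFormatTooOldException;
            import org.apache.lucene.index.IndexWriter;

            final class SafeClose {
              /** IndexFormatTooOldException extends IOException, so a plain
               *  catch of IOException also covers it; catching it separately
               *  lets the app report a clearer error. */
              static void close(IndexWriter writer) throws IOException {
                try {
                  writer.close();
                } catch (IndexFormatTooOldException e) {
                  // The index contains segments this release can no longer
                  // read; report clearly, then propagate.
                  System.err.println("index too old to modify: " + e.getMessage());
                  throw e;
                }
              }
            }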

        Shai Erera added a comment -

        Does this mean I'll need to catch that exception every time I close an IW, or at least prepare to catch it? If so, shouldn't we document it? Is it only relevant to the test?

        Somehow this change / fix starts to get complicated. Can IW swallow those exceptions internally, and relieve the application from all this? When I close(false), I should be prepared to hit MergeAbortedException, it's kinda part of the API contract. But when I close(true), why do I need to be prepared to handle any exception, except for real IO ones?

        Michael McCandless added a comment -

        OK so I think we should fix this test to also accept an IndexTooOldExc during close.

        The .optimize() call for only the 29.nocfs case (for some reason) enrolls 2 pending merges to IW.

        The 1st merge hits an exception, throwing up through the .optimize() to the test. But the 2nd merge remains queued, and in IW.close() we give MS a chance to run any merges it needs to, and that 2nd merge then also hits an exc.
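
        A sketch of that test-side fix (a fragment, assuming writer is an IW opened over the too-old index):

            // Accept the same exception from close() that optimize() can throw,
            // since the second queued merge may run while close() waits on MS.
            try {
              writer.optimize();
            } catch (IndexFormatTooOldException e) {
              // expected: the first queued merge hit the too-old segment
            }
            try {
              writer.close();
            } catch (IndexFormatTooOldException e) {
              // also acceptable: the second queued merge ran during close()
            }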

        Michael McCandless added a comment -

        Hmm.... indeed you can repro with:

        ant test-core -Dtestcase=TestBackwardsCompatibility -Dtestmethod=testUnsupportedOldIndexes -Dtests.seed=-7202471693621265890:9015568443891620555

        I'll revert until I can figure this out... sorry!

        Uwe Schindler added a comment -

        This commit sometimes fails TestBackwards because IndexWriter.close() now also throws IndexFormatTooOldException if the previous call to optimize() has already thrown it.

        Michael McCandless added a comment -

        OK thanks Shai... I'll commit shortly.

        Shai Erera added a comment -

        Ok - I agree maybeMerge is probably less frequently called than optimize. And perhaps we can look at it that way: when you call optimize, you know exactly what to expect. You control the # of final segments. When you call maybeMerge, Lucene does not guarantee the final result. Unless you know exactly the state of all the segments in the index (which, except in unit tests, I think is very unlikely) and exactly what your MP is doing, you cannot expect any guaranteed outcome from calling maybeMerge, except for it "doing the best effort".

        What bothered me is that even though both maybeMerge and optimize may go through several levels of merging following one call, one is guaranteed to complete and the other isn't. But since optimize is more common in apps than maybeMerge, I'm willing to make that exception. Perhaps then add to the maybeMerge docs that if you want to guarantee merges finish when close is called, you should wait for merges? Or should we add it to close?

        I'm fine now with this fix. +1 to commit.

        Michael McCandless added a comment -

        Just want to point out that calling maybeMerge is as explicit as calling optimize.

        But: apps don't normally call maybeMerge? This is typically called within IW, eg on segment flush.

        I mean, it is public so apps can call it, but I expect very few do (vs optimize, which apps use a lot). It's the exception, not the rule...

        I guess I feel that close should try to close quickly – an app would not expect close to randomly take a long time (it's already bad enough since a large merge could be in process...). So, allowing other merges to start up, which could easily be large merges since they are follow-on ones, would make that worse.

        Alternatively, we could define the semantics of close as being allowed to prevent a running optimize from actually completing? Then we'd have to change this test, eg to call .waitForMerges before close.

        Shai Erera added a comment -

        I don't personally mind either way. Just want to point out that calling maybeMerge is as explicit as calling optimize. You can argue for both that if an app wants to wait for merges it can call waitForMerges. In fact, an app calling close() already stated it wants to wait for merges - it's as if it called waitForMerges followed by close.

        I think you're trying to distinguish merges that started because the MP decided they should run following a certain commit from those triggered by an explicit call to optimize. So IMO maybeMerge and optimize are the same, as both were explicitly initiated by the application.

        This test fails because it assumes optimize will run to completion. What if the test assumed maybeMerge runs to completion? Isn't that a valid expectation from an application calling close()? We're also distinguishing the first round of merges from subsequent rounds, only when maybeMerge is called, but not optimize...

        Michael McCandless added a comment -

        We do allow all running merges to run to completion.

        But, we don't allow new merges to start, unless it's part of an ongoing optimize (as of this patch).

        I think this distinction makes sense? Since optimize was an explicit call, it should run until completion. But merging can simply pick up the next time the index is opened?

        If an app really wants to allow all merges to run before closing (even new ones starting) it can call waitForMerges and then close.
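
        That pattern, as a sketch (assuming writer is an open IndexWriter; both methods are public on IW):

            writer.waitForMerges();  // block until all outstanding merges complete
            writer.close();          // then close without cutting any merges short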

        Shai Erera added a comment -

        Ok Mike, that makes sense. You want to allow optimize() to finish all possible merges. Why then not let regular merges finish all the way through, even if we're closing? I mean, the application wants to wait for all running merges, so why is optimize() different than maybeMerge()?

        Robert Muir added a comment -

        thanks for tracking this down...!

        I think if we fix this one, then we are really into the long tail of random test fails (at least for now)

        Michael McCandless added a comment -

        If close(false) is called after optimize() was called, it means the app would like to abort merges ASAP. If so, why would we consult the MP if we're instructed to abort?

        Are you talking about a different use case?

        Sorry, different use case.

        This use case is you call .optimize(doWait=false) then you call a normal .close() (ie, wait for merges). In this case we wait for all running merges to finish, but don't start any new ones. My patch would still allow new ones to start if the merges are due to a running optimize.

        Your use case, where .close(false) is called, will in fact abort all running merges and close quickly. Ie we will not start new merges, even for optimize, if you pass false to close, with this patch.
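
        The two modes under discussion, as this era's IndexWriter.close(boolean waitForMerges) expresses them (illustrative):

            writer.close(true);   // the default close(): wait for running merges
            // ...or, to abort running merges and return quickly:
            // writer.close(false);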

        Shai Erera added a comment -

        For education purposes - why should we consult the MP if it's an optimize, even while closing? If close(false) is called after optimize() was called, it means the app would like to abort merges ASAP. If so, why would we consult the MP if we're instructed to abort?

        Are you talking about a different use case?

        Michael McCandless added a comment -

        I think I found this!

        After a merge completes, IW then checks w/ the merge policy to see if follow-on merges are now necessary.

        But this check is skipped if IW.close is pending (ie has been called and is waiting for merges to complete).

        However, if that merge is an optimize, then we should in fact consult the merge policy even when a close is pending, which we are not doing today.

        Tiny patch (attached) should fix it.
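
        A hypothetical sketch of that condition (illustrative names, not the actual IndexWriter internals):

            final class MergeCascade {
              /** Mirrors the described fix: skip follow-on merges when close()
               *  is pending, unless the finished merge is part of an optimize. */
              static boolean shouldFindFollowOnMerges(boolean closePending,
                                                      boolean mergeIsOptimize) {
                return !closePending || mergeIsOptimize;
              }
            }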

        Mark Miller added a comment -

        I'm catching something similar on current tests, I think:

            [junit] Testsuite: org.apache.lucene.index.TestThreadedOptimize
            [junit] Testcase: testThreadedOptimize(org.apache.lucene.index.TestThreadedOptimize):	FAILED
            [junit] expected:<248> but was:<256>
            [junit] junit.framework.AssertionFailedError: expected:<248> but was:<256>
            [junit] 	at org.apache.lucene.index.TestThreadedOptimize.runTest(TestThreadedOptimize.java:119)
            [junit] 	at org.apache.lucene.index.TestThreadedOptimize.testThreadedOptimize(TestThreadedOptimize.java:142)
            [junit] 	at org.apache.lucene.util.LuceneTestCase.runBare(LuceneTestCase.java:380)
            [junit] 	at org.apache.lucene.util.LuceneTestCase.run(LuceneTestCase.java:372)
            [junit] 
            [junit] 
            [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 0.733 sec
        
        Shai Erera added a comment -

        Sorry, I've missed the part about this happening in the backwards tests. The line numbers match for me, and I see the assertion messages. I do think though that additional info such as the number of segments and deleted docs would be useful, since reader.isOptimized() will return false if either of these two is wrong.

        And we can add the same message to the regular test as well ...

        Shai Erera added a comment -

        I'm guessing that this line fails (which is 126 in my most recent checkout):

              assertTrue(reader.isOptimized());
        

        Is this the one that's pointed to by your code, Mike? If so, I suggest we include a message in the assertion, something like "index should be optimized". It's annoying that JUnit does not print "should be true but was false", or something like that, and instead prints 'null', which is more intimidating.
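
        For example (junit.framework.Assert accepts a message argument):

              assertTrue("index should be optimized", reader.isOptimized());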

        Perhaps we should also add some more info to the print, like the number of segments in the index and whether there are deletions, so we'd have a better clue why the test failed?

        I've tried to run the test a couple of times, but it passed ...


          People

          • Assignee:
            Unassigned
          • Reporter:
            Michael McCandless
          • Votes:
            0
          • Watchers:
            1

            Dates

            • Created:
            • Updated:
            • Resolved:
