Solr / SOLR-6275

Improve accuracy of QTime reporting

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 5.1
    • Component/s: search
    • Labels:
      None

      Description

      Currently, QTime uses currentTimeMillis instead of nanoTime and hence is not suitable for elapsed-time measurements. Further, it only starts after all the dispatch logic in SolrDispatchFilter (the same goes for the top-level timing reported by debug=timing), which may or may not be expensive, and hence may not fully represent the time taken by the search. This issue remedies both cases.

      Attachments

      1. SOLR-6275.patch (17 kB), Ramkumar Aiyengar

        Activity

        ASF GitHub Bot added a comment -

        GitHub user andyetitmoves opened a pull request:

        https://github.com/apache/lucene-solr/pull/70

        solr: Start RTimer for SearchHandler from right when the request starts

        Patch for SOLR-6275

        You can merge this pull request into a Git repository by running:

        $ git pull https://github.com/bloomberg/lucene-solr trunk-rtimer-qtime

        Alternatively you can review and apply these changes as the patch at:

        https://github.com/apache/lucene-solr/pull/70.patch

        To close this pull request, make a commit to your master/trunk branch
        with (at least) the following in the commit message:

        This closes #70


        commit 4dbe0b2660331de10944d8b0290a4f7fcae0f1ea
        Author: Ramkumar Aiyengar <raiyengar@bloomberg.net>
        Date: 2014-07-03T18:55:37Z

        solr: Start RTimer for SearchHandler from right when the request starts


        Erick Erickson added a comment -

        Hmmm, I'm not sure I agree. QTime is useful for knowing how long the lower level stuff took.
        Thus it's useful as it stands.

        Having the dispatch time reported too seems valuable, but I don't think folding it into
        QTime is a good idea. Reporting both, however, gives me more information to work
        with and a much better idea of what to look for when explaining query slowness than
        if the dispatch time were folded into QTime. A better alternative would be to include
        this time as a separate component in the return timings block?

        -1 on switching to nanoTime for reporting

        I mean we're talking human-time here. When I'm asking how long a query took,
        my frail human system is not capable of noticing a difference of a millisecond or two.
        nanoTime is not necessarily all that accurate either. Sure, it gives a lot of precision,
        but that's different from accuracy.

        "This method provides nanosecond precision, but not necessarily nanosecond accuracy.
        No guarantees are made about how frequently values change"

        Frankly, this doesn't seem worth the change to me. And especially not if we return the full
        nanoTime string, which would render any current tools for reporting performance
        metrics invalid by a factor of 100,000, all for extremely questionable utility.

        Ramkumar Aiyengar added a comment - - edited

        Hmmm, I'm not sure I agree. QTime is useful for knowing how long the lower level stuff took. Thus it's useful as it stands.

        Can you elaborate what you mean by "lower level stuff"? If you are intending QTime to reflect time taken by Lucene, i.e. roughly the time taken by the top-level collector to collect the results needed, QTime covers way more than that already, including time taken for query parsing and finishing up just in the non-distributed case. In the distributed case, it also covers time taken waiting on the shard handler factory pool, network latency, any servlet container pooling time at the processing shards, time taken waiting for the federating node to take all responses, and time taken to merge all responses.

        QTime as it stands is currently defined from the time the SolrQueryRequest object is created till the response is rendered, which is hard to associate any semantic meaning with; it is roughly all steps except the logic required for resolving the core to send the request to. All I am doing is adding that as well, to logically mean (again roughly, but less roughly) "the time taken by the servlet to service the request".

        I mean we're talking human-time here. When I'm asking how long a query took, my frail human system is not capable of noticing a difference of a millisecond or two. nanoTime is not necessarily all that accurate either. Sure, it gives a lot or precision, but that's different from accuracy.

        The problem is not millisecond vs. nanosecond accuracy. currentTimeMillis represents the system's wall clock and is subject to issues like clock skew. For example, if NTP resets the time, or the sysadmin changes it for some reason, or some other action alters the wall time, the difference between two such measurements can be totally incorrect (including being negative). That is why nanoTime is preferred for timing measurements: it is guaranteed to be monotonic (where the OS supports it, i.e. everywhere except some older versions of Windows). See SOLR-5734 for context; we changed all other such references there, but QTime was left out.

        Frankly, this doesn't seem worth the change to me. And especially if we return the full nanoTime string which would render any current tools for reporting performance metrics invalid by a factor 100,000, all for extremely questionable utility.

        We are still reporting as milliseconds, no change in resolution here. The RTimer utility class used for this purpose already does the conversion.
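        As a rough illustration of the distinction (a hedged sketch, not the actual RTimer class from the patch), a monotonic elapsed timer might look like this, with the nanosecond-to-millisecond conversion done only at reporting time:

```java
// Hypothetical sketch of a monotonic elapsed timer; Solr's real RTimer
// differs. System.nanoTime() is monotonic on most platforms, so the
// delta is immune to wall-clock resets (NTP, sysadmin changes), unlike
// a delta of two System.currentTimeMillis() readings.
public class MonotonicTimer {
    private final long startNanos;

    public MonotonicTimer() {
        this.startNanos = System.nanoTime();
    }

    /** Elapsed time so far, reported in milliseconds as QTime is. */
    public long elapsedMillis() {
        return nanosToMillis(System.nanoTime() - startNanos);
    }

    /** Pure conversion helper, exposed for clarity. */
    public static long nanosToMillis(long nanos) {
        return nanos / 1_000_000L;
    }

    public static void main(String[] args) throws InterruptedException {
        MonotonicTimer t = new MonotonicTimer();
        Thread.sleep(50);
        System.out.println("elapsed ms: " + t.elapsedMillis());
    }
}
```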

        Mark Miller added a comment -

        hence is not suitable for time measurements.

        +1, we should probably fix it.

        This is where the performance issues I've read about with concurrent calls to nanoTime scare me, though. This happens per request. If that turned out to be a decent, measurable hit, it may be better that this method return bad information .0001% of the time.

        Ramkumar Aiyengar added a comment -

        From what I have seen from googling, yes, nanoTime is way slower (only on Windows; there are claims it's actually faster on Linux – not that I buy that): currentTimeMillis takes a few nanoseconds and nanoTime a microsecond or two. But for what we are dealing with, I doubt it matters, especially once per request. I didn't see anything about concurrency though, do you have a link?

        Theoretically, we could add a flag to RTimer which falls back to currentTimeMillis on windows alone (ugh), but I doubt the ugliness is warranted.
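        The flag floated above might be sketched roughly like this (purely hypothetical; nothing like it was actually added, and the class and method names are illustrative only):

```java
// Hypothetical sketch of an RTimer-style flag that falls back to
// currentTimeMillis; illustrative only, not from the actual patch.
public class FallbackTimer {
    private final boolean useWallClock;
    private final long startNanos;

    public FallbackTimer(boolean useWallClock) {
        this.useWallClock = useWallClock;
        this.startNanos = now();
    }

    // Both sources are normalized to nanoseconds so elapsedMillis()
    // works the same way for either backend.
    private long now() {
        return useWallClock ? System.currentTimeMillis() * 1_000_000L
                            : System.nanoTime();
    }

    public long elapsedMillis() {
        return (now() - startNanos) / 1_000_000L;
    }

    /** Illustrative OS check; real detection would be more careful. */
    public static boolean isWindows() {
        return System.getProperty("os.name", "").startsWith("Windows");
    }

    public static void main(String[] args) {
        // The "ugh" part: pick the backend per OS at construction time.
        FallbackTimer t = new FallbackTimer(isWindows());
        System.out.println("elapsed ms: " + t.elapsedMillis());
    }
}
```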

        Mark Miller added a comment - - edited

        This one perhaps? http://shipilev.net/blog/2014/nanotrusting-nanotime/

        There were two good posts on nanoTime floating around a couple/few months ago, so perhaps it was the other one. In any case, it's easy enough to spot-check with a simple benchmark, I think. There are then the variations between OSes and such, but I'm sure that would come to our attention, and it wouldn't be very likely if it's not really noticeable on one setup.

        Mark Miller added a comment -

        But yeah, I'm not really worried about the single-call cost difference. I just have a faint memory of reading that it's the concurrent calls to nanoTime that will kill you.

        Ramkumar Aiyengar added a comment -

        I added a rudimentary test for System.nanoTime performance, there's some perf degradation when you increase the number of threads but it's not alarming. And actually beyond a certain number of threads, I think some kind of caching kicks in and the average time dips considerably. I have updated the pull request with the test.

        The following output was from a Linux machine with 16 cores:

        718 T11 oasu.TestUtils.testNanoTimeSpeed testNanoTime: maxNumThreads = 100, numIters = 1000
        723 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 1, time_per_call = 423ns
        724 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 2, time_per_call = 295ns
        725 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 3, time_per_call = 109ns
        726 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 4, time_per_call = 491ns
        727 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 5, time_per_call = 747ns
        728 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 6, time_per_call = 851ns
        729 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 7, time_per_call = 1031ns
        731 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 8, time_per_call = 453ns
        731 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 9, time_per_call = 42ns
        732 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 10, time_per_call = 77ns
        733 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 11, time_per_call = 78ns
        733 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 12, time_per_call = 44ns
        734 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 13, time_per_call = 74ns
        736 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 14, time_per_call = 47ns
        737 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 15, time_per_call = 46ns
        738 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 16, time_per_call = 68ns
        738 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 17, time_per_call = 45ns
        738 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 18, time_per_call = 46ns
        739 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 19, time_per_call = 46ns
        739 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 20, time_per_call = 47ns
        740 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 21, time_per_call = 47ns
        741 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 22, time_per_call = 65ns
        741 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 23, time_per_call = 48ns
        742 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 24, time_per_call = 47ns
        ...
        795 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 96, time_per_call = 48ns
        796 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 97, time_per_call = 47ns
        796 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 98, time_per_call = 47ns
        799 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 99, time_per_call = 67ns
        800 T11 oasu.TestUtils.testNanoTimeSpeed numThreads = 100, time_per_call = 55ns
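        The rudimentary test described above might be sketched roughly as follows (a simplified stand-in, not the actual TestUtils code from the pull request):

```java
import java.util.concurrent.atomic.AtomicLong;

// Simplified stand-in for the nanoTime speed test described above;
// not the actual code from the pull request. Each thread calls
// System.nanoTime() in a loop and we average the per-call cost.
public class NanoTimeBench {
    public static long avgNanosPerCall(int numThreads, int itersPerThread) {
        AtomicLong totalNanos = new AtomicLong();
        Thread[] threads = new Thread[numThreads];
        for (int i = 0; i < numThreads; i++) {
            threads[i] = new Thread(() -> {
                long start = System.nanoTime();
                long sink = 0;
                for (int j = 0; j < itersPerThread; j++) {
                    sink += System.nanoTime();  // the call under test
                }
                totalNanos.addAndGet(System.nanoTime() - start);
                if (sink == 42) System.out.println(sink);  // defeat dead-code elimination
            });
            threads[i].start();
        }
        try {
            for (Thread t : threads) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return totalNanos.get() / ((long) numThreads * itersPerThread);
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 4; n++) {
            System.out.println("numThreads = " + n
                + ", time_per_call = " + avgNanosPerCall(n, 1000) + "ns");
        }
    }
}
```

        Note the accumulation into `sink`: without consuming each return value, the JIT could in principle elide the very calls being measured.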
        
        Ramkumar Aiyengar added a comment -

        Mark Miller, do those numbers sound reasonable to you?

        Mark Miller added a comment -

        Yeah, I don't see a real issue with it.

        Ramkumar Aiyengar added a comment -

        Hey Mark, could this go in? Happy to help if there's anything else needed here..

        Mark Miller added a comment -

        Sorry, I've been inactive for a while. Could I ask you to bring this one up to date as well?

        Ramkumar Aiyengar added a comment -

        Done!

        Ramkumar Aiyengar added a comment -

        Brought this up to date again, could this go in?

        Ramkumar Aiyengar added a comment -

        Updated patch, passes tests and precommit.

        Ramkumar Aiyengar added a comment -

        Mark, let me know if this attached patch looks good and I can commit it..

        Mark Miller added a comment -

        +1, looks okay to me.

        ASF subversion and git services added a comment -

        Commit 1663829 from andyetitmoves@apache.org in branch 'dev/trunk'
        [ https://svn.apache.org/r1663829 ]

        SOLR-6275: Improve accuracy of QTime reporting

        ASF GitHub Bot added a comment -

        Github user andyetitmoves closed the pull request at:

        https://github.com/apache/lucene-solr/pull/70

        ASF subversion and git services added a comment -

        Commit 1663886 from andyetitmoves@apache.org in branch 'dev/branches/branch_5x'
        [ https://svn.apache.org/r1663886 ]

        SOLR-6275: Improve accuracy of QTime reporting

        Ramkumar Aiyengar added a comment -

        Reopening this to resolve some Jenkins failures on MacOSX..

        Uwe Schindler pointed me to http://mail.openjdk.java.net/pipermail/hotspot-dev/2013-May/009496.html which talks about MacOSX using wall time (gettimeofday) instead – which would certainly fail this test (and is in some sense validation of why this issue exists!)

        We have two approaches here:

        • Disable the test for nanoTime altogether
        • Disable it just for MacOSX

        I haven't included reverting the change as an option, as it doesn't make things any better: the test failure is on a platform that is a no-op as far as this change is concerned anyway, and there haven't been other failures.
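        The second option might look something like this (a hedged sketch only; the class and method names are illustrative, and the test was ultimately removed rather than guarded):

```java
// Hypothetical sketch of skipping the nanoTime test only on MacOSX;
// the real test was ultimately removed instead. With the Lucene test
// framework one would use an assume-style check; a plain os.name test
// stands in for it here.
public class NanoTimeTestGuard {

    /** Illustrative OS check against the JVM's os.name property. */
    static boolean isMacOsX(String osName) {
        return osName.startsWith("Mac OS X");
    }

    public static void main(String[] args) {
        if (isMacOsX(System.getProperty("os.name", ""))) {
            // On older JDKs MacOSX backed nanoTime with gettimeofday
            // (wall time), so timing assertions there are unreliable.
            System.out.println("SKIP: nanoTime may be wall-clock backed here");
            return;
        }
        long a = System.nanoTime();
        long b = System.nanoTime();
        if (b < a) throw new AssertionError("nanoTime went backwards");
        System.out.println("OK: nanoTime monotonic over two calls");
    }
}
```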

        Erick Erickson added a comment -

        +1 for disabling on MacOSX.

        Uwe Schindler added a comment -

        +1 to disable this test completely. As I said, it's not testing Solr; it checks some stuff in the JVM, in a way that's likely to break on busy hardware (not only on OSX).

        ASF subversion and git services added a comment -

        Commit 1666754 from Ramkumar Aiyengar in branch 'dev/trunk'
        [ https://svn.apache.org/r1666754 ]

        SOLR-6275: Remove nanoTime speed test

        ASF subversion and git services added a comment -

        Commit 1666755 from Ramkumar Aiyengar in branch 'dev/branches/branch_5x'
        [ https://svn.apache.org/r1666755 ]

        SOLR-6275: Remove nanoTime speed test

        Ramkumar Aiyengar added a comment -

        I have removed the full test, mainly because it has ground through for a while now and mostly only MacOSX seems to be failing.

        At this point, this seems to be mostly testing (and failing on) OS scheduling, due to the Jenkins machine running at full CPU usage.

        FWIW, the slow/non-monotonic nanoTime issue on MacOSX is now resolved (https://bugs.openjdk.java.net/browse/JDK-8040140), and I have seen these failures both with the fix applied (on Java 8) and without (on Java 7) – so it seems mainly due to scheduling.

        Timothy Potter added a comment -

        Bulk close after 5.1 release


          People

          • Assignee:
            Ramkumar Aiyengar
            Reporter:
            Ramkumar Aiyengar
          • Votes:
            0
            Watchers:
            8
