Hadoop Common
HADOOP-8097

TestRPCCallBenchmark failing w/ port in use - handling badly

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Cannot Reproduce
    • Affects Version/s: 0.24.0
    • Fix Version/s: 0.24.0
    • Component/s: ipc
    • Labels:
      None
    • Environment:

      6 core xeon w/ 12GB RAM, hyperthreading enabled; Ubuntu 10.04 w/ Java6 OpenJDK

      Description

      I'm seeing TestRPCCallBenchmark fail with port-in-use errors, which is probably related to some other test (race condition on shutdown?), but which isn't being handled that well in the test itself: although the log shows the binding exception, the test is failing on a connection timeout.

      1. HADOOP-8097.patch
        1.0 kB
        Steve Loughran


          Activity

          Steve Loughran added a comment -

          Marking as can't reproduce until someone else encounters it.

          Steve Loughran added a comment -

          @Todd - seems reasonable. I've updated the env details for better diagnostics. That desktop threw up a lot of race conditions in other code through a combination of speed and parallelism; this test failure is probably another example, but as it's in the test classes, it matters less.

          Todd Lipcon added a comment -

          I don't think it's changed, so probably it's still a problem. That said, I'd never seen this test fail, so probably not worth fixing unless we have an environment handy in which we can reproduce it.

          Steve Loughran added a comment -

          Todd, do you think this is still a problem? The box I had which showed it is no longer in my possession.

          Steve Loughran added a comment -

          Both of those would be good, because fixed ports invariably cause problems on Jenkins builds. Are you going to fix this?

          Todd Lipcon added a comment -

          I'm not sure this is the best fix (relying on a different static port). A few other ideas:

          • change the benchmark so that if a port isn't specified, it binds to port 0, and then has the clients connect to whichever port gets bound
          • make sure it uses REUSEADDR so that it can still bind despite the TIME_WAIT sockets

          Do either of those make sense? I honestly thought I'd written it to use port 0, but apparently I didn't.
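
          Both suggestions can be sketched with plain `java.net.ServerSocket` (the benchmark itself uses Hadoop's RPC `Server`, so this is only illustrative, not the actual fix): bind to port 0 so the OS assigns a free ephemeral port that clients then connect to, and set SO_REUSEADDR before binding so TIME_WAIT sockets don't block the bind.

          ```java
          import java.net.InetSocketAddress;
          import java.net.ServerSocket;

          public class EphemeralPortSketch {
              public static void main(String[] args) throws Exception {
                  try (ServerSocket server = new ServerSocket()) {
                      // SO_REUSEADDR lets the bind succeed even if earlier
                      // connections on the port are still in TIME_WAIT.
                      // Must be set before bind().
                      server.setReuseAddress(true);
                      // Port 0 asks the OS for any free ephemeral port.
                      server.bind(new InetSocketAddress(0));
                      // Clients would connect to this discovered port.
                      int port = server.getLocalPort();
                      System.out.println(port > 0);
                  }
              }
          }
          ```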

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12515354/HADOOP-8097.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in .

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/616//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/616//console

          This message is automatically generated.

          Steve Loughran added a comment -

          HADOOP-8070 added this test case, so is the root cause; this patch is therefore trunk-only

          Steve Loughran added a comment -

          Patch fixes the ports, verified by netstat:

          tcp 0 0 wildhaus:12351 wildhaus:57692 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57701 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57686 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57681 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51429 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57698 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57705 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51409 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57687 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57703 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57708 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57689 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51436 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57684 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51428 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57699 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51407 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51420 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57707 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57700 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51432 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51412 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51435 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57682 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51417 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57683 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57690 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57694 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57688 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51430 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51424 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51413 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51423 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51406 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51414 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57702 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51433 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57697 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57691 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51426 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57696 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51427 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51422 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51431 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57706 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57704 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51425 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51434 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51410 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51415 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57709 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57685 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57710 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51418 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57695 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51421 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51416 TIME_WAIT
          tcp 0 0 wildhaus:12351 wildhaus:57693 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51411 TIME_WAIT
          tcp 0 0 wildhaus:12350 wildhaus:51419 TIME_WAIT

          Steve Loughran added a comment -

          Running the test explicitly (rather than in the bulk test run) passes; a netstat run immediately afterwards shows 50+ TCP connections to port 12345 in TIME_WAIT, which is probably triggering the timeout, as both test cases in TestRPCCompatibility use the same (default) port of 12345.

          I propose using an explicit port number in both as another argument. This will avoid inter-test port clashes, in which test #2 runs while the socket is still in TIME_WAIT.

          SANS doesn't note anything legitimate running on this port number, but as most of the traffic on it is trojan-related, it is probably best to change to a new value:
          http://isc.sans.edu/port.html?port=12345

          Ports 12350 and 12351 appear low-risk.
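
          One way to sanity-check a candidate fixed port before adopting it is a try-bind probe; the helper below is hypothetical (not part of the patch), and on a shared Jenkins box the result can change between the probe and the actual bind, so it's a diagnostic aid rather than a guarantee.

          ```java
          import java.net.InetSocketAddress;
          import java.net.ServerSocket;

          public class PortProbe {
              // Returns true if the port could be bound right now, i.e. it is free.
              static boolean isFree(int port) {
                  try (ServerSocket s = new ServerSocket()) {
                      s.setReuseAddress(true);
                      s.bind(new InetSocketAddress(port));
                      return true;
                  } catch (Exception e) {
                      return false;
                  }
              }

              public static void main(String[] args) {
                  // Probe one of the proposed replacement ports.
                  System.out.println(isFree(12350));
              }
          }
          ```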

          Steve Loughran added a comment -

          Stack trace and error log:

          testBenchmarkWithWritable(org.apache.hadoop.ipc.TestRPCCallBenchmark) Time elapsed: 20.007 sec <<< ERROR!
          java.lang.Exception: test timed out after 20000 milliseconds
          at java.lang.Object.wait(Native Method)
          at java.lang.Thread.join(Thread.java:1186)
          at java.lang.Thread.join(Thread.java:1239)
          at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:163)
          at org.apache.hadoop.ipc.RPCCallBenchmark.run(RPCCallBenchmark.java:306)
          at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
          at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:83)
          at org.apache.hadoop.ipc.TestRPCCallBenchmark.testBenchmarkWithWritable(TestRPCCallBenchmark.java:30)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
          at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
          at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
          at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
          at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
          testBenchmarkWithProto(org.apache.hadoop.ipc.TestRPCCallBenchmark) Time elapsed: 13.197 sec <<< ERROR!
          java.net.BindException: Problem binding to [0.0.0.0:12345] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
          at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:675)
          at org.apache.hadoop.ipc.Server.bind(Server.java:309)
          at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:402)
          at org.apache.hadoop.ipc.Server.<init>(Server.java:1742)
          at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:830)
          at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:350)
          at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:329)
          at org.apache.hadoop.ipc.RPC.getServer(RPC.java:661)
          at org.apache.hadoop.ipc.RPCCallBenchmark.startServer(RPCCallBenchmark.java:230)
          at org.apache.hadoop.ipc.RPCCallBenchmark.run(RPCCallBenchmark.java:261)
          at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
          at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:83)
          at org.apache.hadoop.ipc.TestRPCCallBenchmark.testBenchmarkWithProto(TestRPCCallBenchmark.java:43)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
          at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
          at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
          at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
          at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)


            People

            • Assignee:
              Steve Loughran
            • Reporter:
              Steve Loughran
            • Votes:
              0
            • Watchers:
              3

              Dates

              • Created:
                Updated:
                Resolved:
