Details

    • Type: Test
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.22.0
    • Fix Version/s: 0.22.0
    • Component/s: hdfs-client
    • Labels:
      None
    • Hadoop Flags:
      Reviewed
    Attachments

    1. hdfs-1310-1.txt
      2 kB
      sam rash
    2. hdfs-1310-2.txt
      2 kB
      sam rash

      Issue Links

        Activity

        Hudson added a comment -

        Integrated in Hadoop-Hdfs-trunk-Commit #383 (See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/383/)
        HDFS-1310. The ClientDatanodeProtocol proxy should be stopped in DFSInputStream.readBlockLength(..). Contributed by sam rash

        Tsz Wo Nicholas Sze added a comment -

        Thanks Tanping for running the tests.

        No new unit tests needed because this is fixing an existing unit test.

        I have committed this. Thanks, sam.

        Tanping Wang added a comment -

        I run "ant test" on my Linux box under hadoop-hdfs. Besides this test case, there should be 8 test cases already failing, i.e.

        [junit] Test org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery FAILED
        [junit] Test org.apache.hadoop.hdfs.TestFileStatus FAILED
        [junit] Test org.apache.hadoop.hdfs.TestHDFSServerPorts FAILED
        [junit] Test org.apache.hadoop.hdfs.TestHDFSTrash FAILED
        [junit] Test org.apache.hadoop.hdfs.server.namenode.TestReplicationPolicy FAILED
        [junit] Test org.apache.hadoop.fs.TestHDFSFileContextMainOperations FAILED
        [junit] Test org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery FAILED
        [junit] Test org.apache.hadoop.hdfs.TestFiHFlush FAILED

        After applying the patch, these tests are still failing, but no other unit test cases failed.

        sam rash added a comment -

        Alright, I get a ton of failures on trunk w/o the patches. I don't have time right now to debug the issue--any chance someone with a clean test env can run these through?

        sam rash added a comment -

        Several failures, but I had similar problems running the test suite before--it's something about my local env, I think. I'm going to re-run the suite w/o the patches to see what fails there (another couple hours).

        Tsz Wo Nicholas Sze added a comment -

        It is "ant test".

        sam rash added a comment -

        I have not run it all the way through yet. Is it 'test' or 'test-core' these days?

        Tsz Wo Nicholas Sze added a comment -

        No problem and thanks for running test-patch. Have you also run the unit tests?

        sam rash added a comment -

        My apologies for the delay--I came down with a cold right before the long weekend.

        results of test-patch:

        [exec] -1 overall.
        [exec]
        [exec] +1 @author. The patch does not contain any @author tags.
        [exec]
        [exec] -1 tests included. The patch doesn't appear to include any new or modified tests.
        [exec] Please justify why no new tests are needed for this patch.
        [exec] Also please list what manual steps were performed to verify this patch.
        [exec]
        [exec] +1 javadoc. The javadoc tool did not generate any warning messages.
        [exec]
        [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings.
        [exec]
        [exec] +1 findbugs. The patch does not introduce any new Findbugs warnings.
        [exec]
        [exec] +1 release audit. The applied patch does not increase the total number of release audit warnings.
        [exec]
        [exec] +1 system tests framework. The patch passed system tests framework compile.

        Eli Collins added a comment -

        There's a "test-patch" target. You need to run it from an svn tree rather than a git tree. Here's a bash function that's useful for running it:

        function ant-test-patch() {
         ant -Dpatch.file=$1 \
           -Dforrest.home=$FORREST_HOME \
           -Dfindbugs.home=$FINDBUGS_HOME \
           -Djava5.home=$JAVA5_HOME \
           test-patch
        }
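
        For example (the patch path here is hypothetical), from the root of an svn checkout of hadoop-hdfs you would invoke it as ant-test-patch ~/patches/hdfs-1310-2.txt, with FORREST_HOME, FINDBUGS_HOME and JAVA5_HOME exported in your environment.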
        
        Tsz Wo Nicholas Sze added a comment -

        "ant test" is for running the unit tests
        "ant test-patch" is checking other stuffs like findbugs, javadoc, forrest, etc.

        sam rash added a comment -

        Is that just

        ant test

        ?

        I'm not familiar with "test-patch".

        Tsz Wo Nicholas Sze added a comment -

        > I think Hudson is back as of a couple of days ago.

        I saw some Hudson reports recently, but it seems to work only intermittently.

        Sam, could you run test-patch and all the unit tests over your patch?

        Tsz Wo Nicholas Sze added a comment -

        +1 the new patch looks good. Thanks Sam.

        HADOOP-6907 is committed and Hudson is back. It is a good time to try submitting.

        sam rash added a comment -

        Create ClientDatanodeProtocol in the try{} block so that we don't skip checking additional DNs on an exception.

        sam rash added a comment -

        Good point, I'll move the init into the try block.

        Konstantin Boudnik added a comment -

        I think Hudson is back as of a couple of days ago.

        Tsz Wo Nicholas Sze added a comment -
             for(DatanodeInfo datanode : locatedblock.getLocations()) {
        +      final ClientDatanodeProtocol cdp = DFSClient.createClientDatanodeProtocolProxy(
        +        datanode, dfsClient.conf, dfsClient.socketTimeout, locatedblock);
        +      
               try {
        -        final ClientDatanodeProtocol cdp = DFSClient.createClientDatanodeProtocolProxy(
        -            datanode, dfsClient.conf, dfsClient.socketTimeout, locatedblock);
        

        Found a problem: if createClientDatanodeProtocolProxy(..) throws an exception, it won't try the remaining datanodes. Sorry that I did not see it earlier.
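
        For readers following the thread: the second patch presumably reshapes this loop so the proxy is created inside the try (a failure on one datanode then falls through to the next) and is released in a finally. The following is a sketch reconstructed from the diff fragments above, not the committed patch; the getReplicaVisibleLength(..) call and the exception handling are assumptions about the surrounding readBlockLength(..) context.

        for (DatanodeInfo datanode : locatedblock.getLocations()) {
          ClientDatanodeProtocol cdp = null;
          try {
            // creating the proxy inside try{} means a failure here moves on
            // to the next datanode instead of aborting the whole loop
            cdp = DFSClient.createClientDatanodeProtocolProxy(
                datanode, dfsClient.conf, dfsClient.socketTimeout, locatedblock);
            final long n = cdp.getReplicaVisibleLength(locatedblock.getBlock());
            if (n >= 0) {
              return n;
            }
          } catch (IOException ioe) {
            // log and try the next datanode
          } finally {
            if (cdp != null) {
              RPC.stopProxy(cdp); // always release the connection -- the original leak
            }
          }
        }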

        Tsz Wo Nicholas Sze added a comment -

        Thanks Tanping for checking TestFileConcurrentReader.

        Sam, could you run test-patch and all the unit tests over your patch? Hudson does not seem to be working recently.

        Tanping Wang added a comment -

        Thanks to Kos's and Suresh's instructions, I have been able to install the hadoop-common artifacts in my local maven repository and re-run the test, and it passed. Thanks!

        Konstantin Boudnik added a comment -

        This approach is unlikely to work. You need to maven-install Common artifacts to your local MVN repo and then run HDFS build with -Dresolvers=internal
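
        (For builds of that era this meant, if memory serves, running the mvn-install ant target in your hadoop-common checkout so the SNAPSHOT jars land in your local ~/.m2 repository, and then running ant -Dresolvers=internal test in hadoop-hdfs so ivy resolves hadoop-common from the local repository instead of the Apache snapshot server; treat the exact target names as an assumption.)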

        Tanping Wang added a comment -

        Hi, Sam,
        Based on your comments, this is what I have just tried,
        1) Manually applied the HADOOP-6907 patch to hadoop-common 0.22 trunk, as it has not yet been committed. Built two jars:
        hadoop-common-0.22.0-SNAPSHOT.jar
        and
        hadoop-common-test-0.22.0-SNAPSHOT.jar
        2) Brute-force copied these two jars into the hadoop-hdfs build dir to overwrite the jars that are supposed to be pulled through ivy, hoping that the HADOOP-6907 change to hadoop-common gets picked up this way.
        In detail,
        copy hadoop-common/build/hadoop-common-0.22.0-SNAPSHOT.jar hadoop-common/build/hadoop-common-test-0.22.0-SNAPSHOT.jar
        to
        hadoop-hdfs/build/ivy/lib/Hadoop-Hdfs/test/hadoop-common-0.22.0-SNAPSHOT.jar
        hadoop-hdfs/build/ivy/lib/Hadoop-Hdfs/test/hadoop-common-test-0.22.0-SNAPSHOT.jar
        hadoop-hdfs/build/ivy/lib/Hadoop-Hdfs/common/hadoop-common-0.22.0-SNAPSHOT.jar

        3) Applied your hdfs patch.

        Afterwards, I ran
        ant -Dtestcase=TestFileConcurrentReader run-test-hdfs-all-withtestcaseonly

        multiple times, but the test is still failing.
        Am I missing anything?

        sam rash added a comment -

        Note: you'll still see the hanging if the hadoop-common you use doesn't have HADOOP-6907.

        Tsz Wo Nicholas Sze added a comment -

        Sam, this is a good catch.

        Tanping, could you check TestFileConcurrentReader again?

        sam rash added a comment -

        The Datanode RPC proxy that is created is now stopped properly.

        sam rash added a comment -

        The problem is due to a couple of issues

        1. DFSInputStream was doing a Datanode RPC to get the block length of an in-progress file; it created a proxy object that was never shut down.
        2. Even calling RPC.stopProxy() in DFSInputStream.readBlockLength() failed, due to a faulty hashCode() function in ConnectionId. This will be fixed in https://issues.apache.org/jira/browse/HADOOP-6907

        In the meantime, there is one patch for hdfs, and hadoop common will also need HADOOP-6907 before this stops timing out.

        I will upload the patch for hdfs shortly.
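
        Background on item 2, for anyone unfamiliar with why a faulty hashCode() breaks stopProxy(): the RPC client caches connections in a hash map keyed by a connection id, so a hashCode() that is inconsistent with equals() makes the cleanup lookup miss the live connection. A minimal, self-contained illustration of the required contract follows; ConnectionKey is a hypothetical stand-in, not the actual ConnectionId class or the HADOOP-6907 patch.

        import java.net.InetSocketAddress;

        // Hypothetical stand-in for the kind of key an RPC client caches
        // connections under. Equal keys MUST produce equal hash codes, or
        // hash map removal (and hence proxy cleanup) silently fails.
        final class ConnectionKey {
          private final InetSocketAddress address;
          private final Class<?> protocol;

          ConnectionKey(InetSocketAddress address, Class<?> protocol) {
            this.address = address;
            this.protocol = protocol;
          }

          @Override
          public boolean equals(Object obj) {
            if (!(obj instanceof ConnectionKey)) {
              return false;
            }
            ConnectionKey that = (ConnectionKey) obj;
            return address.equals(that.address) && protocol == that.protocol;
          }

          @Override
          public int hashCode() {
            // derived from exactly the fields that equals() compares
            return 31 * address.hashCode() + protocol.hashCode();
          }
        }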

        sam rash added a comment -

        Actually, the second two are likely fallout from the first--if it died and didn't clean up the locks, this could happen.

        As I noted, I'm a bit short on time tonight, so I'll get to this tomorrow evening. FWIW, this looks familiar--the 'too many open files' with this unit test. I thought I already saw this and fixed it, though, where I simply didn't close a file in a thread...maybe we only patched it in our local branch.

        Thanks for the direct links to the results.

        Tanping Wang added a comment -

        Hi Sam, sorry about the confusion. In my last comment, I mentioned that I ran each test case (out of the total seven tests in TestFileConcurrentReader) one by one, individually, i.e. commenting the other six tests out and leaving only one test running each time. Each single test passed. However, if I run the seven tests from TestFileConcurrentReader together, the last three (sometimes two) tests fail.
        The last three tests are testUnfinishedBlockCRCErrorTransferToVerySmallWrite, testUnfinishedBlockCRCErrorNormalTransfer and testUnfinishedBlockCRCErrorNormalTransferVerySmallWrite.

        Please see the Hudson results for reference:

        https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/413/testReport/org.apache.hadoop.hdfs/TestFileConcurrentReader/testUnfinishedBlockCRCErrorTransferToVerySmallWrite/
        https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/413/testReport/org.apache.hadoop.hdfs/TestFileConcurrentReader/testUnfinishedBlockCRCErrorNormalTransfer/
        https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/413/testReport/org.apache.hadoop.hdfs/TestFileConcurrentReader/testUnfinishedBlockCRCErrorNormalTransferVerySmallWrite/

        The first test case failed because of "java.io.IOException: Too many open files."
        The next two (sometimes one) tests fail due to "Cannot lock storage ... The directory is already locked."
        It seems to me that the test runs into a race condition and does not release resources properly.

        sam rash added a comment -

        Sorry for the delay; I skimmed this jira and the last comment contradicted the title, so I assumed it was in OK shape.
        I will have a minute to look at this in more detail tomorrow night.

        Tanping Wang added a comment -

        I also ran each test from the TestFileConcurrentReader test suite individually, i.e.

        testUnfinishedBlockRead
        testUnfinishedBlockPacketBufferOverrun
        testImmediateReadOfNewFile
        testUnfinishedBlockCRCErrorTransferTo
        testUnfinishedBlockCRCErrorTransferToVerySmallWrite
        testUnfinishedBlockCRCErrorNormalTransfer
        testUnfinishedBlockCRCErrorNormalTransferVerySmallWrite

        Every one of them passed with no problem.

        Konstantin Boudnik added a comment -

        Using sleep intervals in tests is suboptimal at best; using them to fix race-conditioned tests is pretty bad, I'd say.
        Besides, the test is JUnit v3 - an unsupported old version of the framework. We are using 4.5 at the moment. Last but not least, calling super.tearDown() doesn't make any sense at all because this is an empty method of the TestCase class.

        Loops like this

            while (!done) {
              try {
                Thread.sleep(1000);
              } catch (InterruptedException e) {
              }
              done = true;
              BlockLocation[] locations = fileSys.getFileBlockLocations(
                fileSys.getFileStatus(name), 0, blockSize);
              if (locations.length < 1) {
                done = false;
                continue;
              }
            }
        

        are begging for trouble and potential timeouts. There's no guarantee that the loop's condition will ever be satisfied.

        I'd say the test needs to be refactored to JUnit v4.5 and then re-evaluated to see if the timeouts still occur.
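
        A sketch of the bounded alternative being suggested, in JUnit 4 style (the class, method, and timeout values are illustrative placeholders, not the actual test):

        import static org.junit.Assert.fail;

        import org.junit.Test;

        public class BoundedWaitExample {
          // hypothetical stand-in for fileSys.getFileBlockLocations(..).length >= 1
          private boolean blockLocationsAvailable() {
            return true;
          }

          // JUnit 4 enforces a hard upper bound on the whole test method
          @Test(timeout = 60000)
          public void waitForBlockLocations() throws Exception {
            final long deadline = System.currentTimeMillis() + 30000;
            while (!blockLocationsAvailable()) {
              if (System.currentTimeMillis() > deadline) {
                fail("block locations never became available");
              }
              Thread.sleep(100); // short poll, bounded by the deadline above
            }
          }
        }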

        Tanping Wang added a comment -

        Instead of putting in a sleep interval, removing super.tearDown() inside the tearDown() function itself makes the test case pass with Java 1.6.0_12, 1.6.0_15 and 1.6.0_21.

        Tanping Wang added a comment -

        I ran this test case, TestFileConcurrentReader, with Java 1.6.0_07 and it got stuck forever on the Yahoo hudson build box. I switched to Java 1.6.0_15 on the same build box and the test case passed with no problem. (I suspect the test cases run into some race conditions.) On the other hand, on my own Linux box with the latest Java version, 1.6.0_21, initially 3 out of 7 test cases failed. After putting in some sleep time between each test case, shown below,

        protected void tearDown() throws Exception {
          cluster.shutdown();
          cluster = null;

          super.tearDown();
          // NEW: go to sleep
          try {
            Thread.sleep(3000);
          } catch (Exception e) {
          }
        }

        the tests passed.

        Which Java sub-version do we use for Hudson?

        Suresh Srinivas added a comment -

        Todd or Hairong, can you please take a look at this failure?


          People

          • Assignee:
            sam rash
            Reporter:
            Suresh Srinivas
          • Votes:
            0
            Watchers:
            5

            Dates

            • Created:
              Updated:
              Resolved:

              Development