Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.7.0
    • Component/s: test
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      The following are negative test cases for truncate.

      • new length > old length
      • truncating a directory
      • truncating a non-existing file
      • truncating a file without write permission
      • truncating a file opened for append
      • truncating a file in safemode

      Also add more truncate tests, such as truncate with an HA setup, truncate combined with other operations, and multiple truncates.
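
      A rough sketch of how a couple of the negative cases could look (illustrative only, not the code from the attached patches; it assumes the JUnit 4 setup, the fs DistributedFileSystem field, and the BLOCK_SIZE constant of the existing TestFileTruncate):

        @Test
        public void testTruncateNegativeCases() throws Exception {
          final Path dir = new Path("/testTruncateNegative");
          final Path file = new Path(dir, "file");
          DFSTestUtil.createFile(fs, file, 2 * BLOCK_SIZE, (short) 3, 0L);

          // new length > old length must be rejected
          try {
            fs.truncate(file, 3 * BLOCK_SIZE);
            fail("truncate beyond the file length should fail");
          } catch (Exception e) {
            // expected; the exact exception type depends on the server-side check
          }

          // truncating a directory must be rejected
          try {
            fs.truncate(dir, 0);
            fail("truncate on a directory should fail");
          } catch (IOException e) {
            // expected
          }

          // truncating a non-existing file must be rejected
          try {
            fs.truncate(new Path(dir, "nonexistent"), 0);
            fail("truncate on a missing file should fail");
          } catch (FileNotFoundException e) {
            // expected
          }
        }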

      1. h7738_20150204.patch
        12 kB
        Tsz Wo Nicholas Sze
      2. h7738_20150205.patch
        23 kB
        Tsz Wo Nicholas Sze
      3. h7738_20150205b.patch
        32 kB
        Tsz Wo Nicholas Sze
      4. h7738_20150206.patch
        30 kB
        Tsz Wo Nicholas Sze
      5. h7738_20150206b.patch
        30 kB
        Tsz Wo Nicholas Sze
      6. h7738_20150206c.patch
        30 kB
        Tsz Wo Nicholas Sze

        Activity

        szetszwo Tsz Wo Nicholas Sze added a comment -

        h7738_20150204.patch: adds the new tests and revises the exception message in FSNamesystem.recoverLeaseInternal(..).

        hitliuyi Yi Liu added a comment - edited

        I notice the patch distinguishes the recover-lease exception string for create/truncate/append and recoverLease, which is great; I had the same thought. But the exception is still AlreadyBeingCreatedException; should we redefine it?
        Besides this, I'm +1 for the patch, pending Jenkins.

        szetszwo Tsz Wo Nicholas Sze added a comment -

        Thanks for the quick review. Since AlreadyBeingCreatedException is in org.apache.hadoop.hdfs.protocol, changing it to something else is a bigger change which needs more thought. I'd rather keep this patch simple. Sounds good?

        hitliuyi Yi Liu added a comment -

        I'd rather keep this patch simple. Sounds good?

        I'm OK for this. +1.

        Since AlreadyBeingCreatedException is in org.apache.hadoop.hdfs.protocol, changing it to something else is a bigger change which needs more thought

        Yes, but actually AlreadyBeingCreatedException is only explicitly thrown by ClientProtocol#create. My thought is that we can define a separate recover-lease exception which extends IOException and let AlreadyBeingCreatedException extend it; otherwise people may see an "already being created" exception when they do an append/truncate operation, which is odd. Of course, it's a minor improvement; we can also do it in a follow-on JIRA if you think it's necessary.
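
        A loose sketch of the hierarchy proposed above (nothing here was committed in this JIRA, and the name RecoverLeaseException is invented purely for illustration):

          package org.apache.hadoop.hdfs.protocol;

          import java.io.IOException;

          /** Thrown when an operation has to wait for lease recovery on the target file. */
          public class RecoverLeaseException extends IOException {
            private static final long serialVersionUID = 1L;

            public RecoverLeaseException(String message) {
              super(message);
            }
          }

          // The existing AlreadyBeingCreatedException would then simply be re-parented,
          //   public class AlreadyBeingCreatedException extends RecoverLeaseException { ... }
          // so that append/truncate callers can catch a generic recover-lease failure
          // instead of an "already being created" message that does not match their operation.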

        hadoopqa Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12696653/h7738_20150204.patch
        against trunk revision 0b567f4.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 1 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. There were no new javadoc warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.TestFileCreation

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9436//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9436//console

        This message is automatically generated.

        shv Konstantin Shvachko added a comment - edited

        Hey Nicholas, agreed, more test cases are a good idea.
        A few comments on the patch:

        1. I would wrap the op parameter in recoverLeaseInternal() as an enum rather than passing an arbitrary string.
        2. The if-else statement in testBasicTruncate() can be replaced with a single assert:
          assertEquals("File is expected to be closed only for truncates to the block boundary",
                       isReady, (toTruncate == 0 || newLength % BLOCK_SIZE == 0));

          I think comments in asserts are important.

        3. Why the extra bracket blocks in testTruncateFailure()? I don't think freeing local variables is worth it.
        4. In testTruncateFailure() you should probably handle InterruptedException rather than passing it through the test case.
        5. For the safeMode check it would be logical to add it into TestSafeMode.testOperationsWhileInSafeMode(); all other operations are there. (See the sketch after this list.)
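
        A rough sketch of the safe-mode case, written against the public DistributedFileSystem API rather than TestSafeMode's internal helpers; the dfs and BLOCK_SIZE fields and the expected "safe mode" message substring are assumptions here:

          @Test
          public void testTruncateRejectedInSafeMode() throws Exception {
            final Path p = new Path("/testTruncateSafeMode");
            DFSTestUtil.createFile(dfs, p, 2 * BLOCK_SIZE, (short) 1, 0L);

            dfs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_ENTER);
            try {
              dfs.truncate(p, BLOCK_SIZE);
              fail("truncate should be rejected while the NameNode is in safe mode");
            } catch (IOException e) {
              // expected; the SafeModeException message should mention safe mode
              GenericTestUtils.assertExceptionContains("safe mode", e);
            } finally {
              dfs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_LEAVE);
            }
          }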
        shv Konstantin Shvachko added a comment -

        For truncate tests with HA it should be easy to add the case into TestHAAppend. Just create a second file there, fileToTruncate, and truncate it 5 times. The rest should be checked by fsck as in the test.
        Would you like to incorporate it in your patch, or should we open another JIRA?

        szetszwo Tsz Wo Nicholas Sze added a comment -

        h7738_20150205.patch:

        • adds RecoverLeaseOp (see the sketch at the end of this comment);
        • uses assertEquals as suggested;
        • changes DFSTestUtil.getFileSystemAs to not throw InterruptedException;
        • moves the safemode test to TestSafeMode;
        • adds truncate tests with HA.

        Konstantin, thanks for the review. I incorporated all your comments except #3, since I'd like to reuse the variable names. Logically, they are separate test cases.

        For the HA test, it cannot truncate 5 times unless it truncates at block boundaries, since the file is not ready (it is under recovery).
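
        For reference, an operation-aware enum along these lines can look roughly as follows (a sketch only; the actual RecoverLeaseOp in the patch may differ in naming and message format):

          /** One value per caller of recoverLeaseInternal(..). */
          enum RecoverLeaseOp {
            CREATE_FILE,
            APPEND_FILE,
            TRUNCATE_FILE,
            RECOVER_LEASE;

            /** Build an exception message that names the operation that hit the lease conflict. */
            String getExceptionMessage(String src, String holder, String clientMachine,
                String reason) {
              return "Failed to " + this + " " + src + " for " + holder
                  + " on " + clientMachine + " because " + reason;
            }
          }

        Compared with passing an arbitrary string, the enum restricts callers to the four known operations while the message still tells the user whether a create, append, truncate, or recoverLease ran into the conflicting lease.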

        hadoopqa Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12696913/h7738_20150205.patch
        against trunk revision b77ff37.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 6 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. There were no new javadoc warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.TestEncryptedTransfer
        org.apache.hadoop.hdfs.TestHFlush
        org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
        org.apache.hadoop.hdfs.TestFileCreation

        The following test timeouts occurred in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9446//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9446//console

        This message is automatically generated.

        szetszwo Tsz Wo Nicholas Sze added a comment -

        h7738_20150205b.patch

        • fixes TestFileCreation;
        • changes TestHAAppend to use multiple threads;
        • adds more tests: testMultipleTruncate and testTruncateWithOtherOperations.
        hadoopqa Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12696961/h7738_20150205b.patch
        against trunk revision 9d91069.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 7 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. There were no new javadoc warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in .

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9453//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9453//console

        This message is automatically generated.

        shv Konstantin Shvachko added a comment -
        1. RecoverLeaseOp should be static.
        2. Unused import of Assert in TestSafeMode.
        3. It seems that all test cases of testMultipleTruncate() are already covered in testBasicTruncate(), and in a deterministic way. I would remove it, unless random truncates increase your confidence.
        4. TestHAAppend changes look like a complete refactoring of the test. It is not necessary, but it would've been fine with me if it were not failing. I ran it several times; it failed every time. It would be OK to move it to another JIRA if you wish. I did not expect so many changes.
        szetszwo Tsz Wo Nicholas Sze added a comment -

        Thanks, Konstantin.

        1. A Java enum is implicitly static, i.e. there is no non-static enum.
        2. Done.
        3. Yes, it is better to have some random test. Let me decrease the number of blocks from 1000 to 100 so that the overhead is much smaller.
        4. Sure, let's use my previous version of the test. Will file a new JIRA for the new test. BTW, how does it fail?

        Here is a new patch: h7738_20150206.patch

        shv Konstantin Shvachko added a comment -

        > it is better to have some random test.

        In general I don't like random tests because they imply intermittent failures, which are hard to reproduce and therefore fix.
        If the desire to have these is strong then they need sufficient logging information, which would describe the full conditions under which a failure occurs when it does.
        In this particular case your test is a subset of the existing test on every run.
        Will look at the patch in a bit.

        shv Konstantin Shvachko added a comment -

        TestHAAppend fails waiting on checkBlockRecovery() for me. Does it not fail for you?

        java.lang.AssertionError: inode should complete in ~30000 ms.
        Expected: is <true>
             but: was <false>
        	at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
        	at org.junit.Assert.assertThat(Assert.java:865)
        	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.checkBlockRecovery(TestFileTruncate.java:944)
        	at org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend.testMultipleAppendsDuringCatchupTailing(TestHAAppend.java:213)
        
        szetszwo Tsz Wo Nicholas Sze added a comment -

        > In general I don't like random tests because they imply intermittent failures, ...

        If there are intermittent failures, it means that there are bugs either in the code or in the test. I guess what you don't like is poorly written random tests, which may experience intermittent failures. Well-written tests won't have intermittent failures.

        Why do we need random tests? Because the problem space is so huge that it is impossible to try all the cases; we have to do random sampling.

        testBasicTruncate, which is a well-written test, does cover a lot of cases. However, it only tests a 12-byte file with 3 blocks. Also, toTruncate is consecutive. For example, it does not test the case of calling truncate to take out 10 blocks at once.
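
        To make the reproducibility concern concrete, a randomized truncate test can be written so that any failure is replayable. The sketch below is illustrative only (not the exact testMultipleTruncate in the patch) and assumes the fs, BLOCK_SIZE, and LOG fields plus the existing checkBlockRecovery(Path) helper of TestFileTruncate:

          @Test
          public void testRandomTruncates() throws Exception {
            final long seed = System.nanoTime();
            final Random random = new Random(seed);
            LOG.info("seed=" + seed);

            final int numBlocks = 100;
            final Path p = new Path("/testRandomTruncates");
            DFSTestUtil.createFile(fs, p, numBlocks * BLOCK_SIZE, (short) 3, seed);

            long oldLength = numBlocks * BLOCK_SIZE;
            while (oldLength > 0) {
              // jump to an arbitrary shorter length, possibly skipping many blocks at once
              final long newLength = random.nextInt((int) oldLength);
              LOG.info("truncate: " + oldLength + " -> " + newLength);
              final boolean isReady = fs.truncate(p, newLength);
              if (!isReady) {
                checkBlockRecovery(p);  // wait for recovery of the partial last block
              }
              assertEquals(newLength, fs.getFileStatus(p).getLen());
              oldLength = newLength;
            }
          }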

        szetszwo Tsz Wo Nicholas Sze added a comment -

        > TestHAAppend fails waiting on checkBlockRecovery() for me. Does it not fail for you?

        The machine you used is probably slow. It also passed the previous Jenkins run.

        szetszwo Tsz Wo Nicholas Sze added a comment -

        > If the desire to have these is strong then they need sufficient logging information, which would describe the full conditions under which a failure occurs when it does.

        We already have

        LOG.info("newLength=" + newLength + ", isReady=" + isReady);
        

        I think it is good enough. Agree?

        hadoopqa Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12697079/h7738_20150206.patch
        against trunk revision 1425e3d.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 7 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. There were no new javadoc warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.TestHFlush

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9460//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9460//console

        This message is automatically generated.

        szetszwo Tsz Wo Nicholas Sze added a comment -

        h7738_20150206b.patch: fixes TestHFlush.

        shv Konstantin Shvachko added a comment -
        1. Since you kept TestHAAppend in the patch I'd suggest simplifying randomFilePartition(). Something like
            public static int[] randomFilePartition(int n, int parts) {
              assert n > parts : "n=" + n + " should exceed parts=" + parts;
              int[] p = new int[parts];
              for(int i=0, left=0, right=n-parts; i < parts; left=p[i], right++, i++) {
                p[i] = nextInt(right - left) + left;
              }
              return p;
            }
          
        2. We also need to log the partitions somewhere to be able to reproduce if something fails.
        3. The test is consistently passing now.

        > I think it is good enough. Agree?

        I will not argue. But I do not support your preoccupation with random tests, as they introduce non-determinism. A random mix of operations is very good as a standalone application, which you can run overnight or for a few days. Such apps can be incorporated, say, with BigTop, but they make poor unit tests, imho.

        hadoopqa Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12697147/h7738_20150206b.patch
        against trunk revision da2fb2b.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 8 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. There were no new javadoc warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.TestEncryptedTransfer
        org.apache.hadoop.hdfs.server.balancer.TestBalancer

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9470//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9470//console

        This message is automatically generated.

        szetszwo Tsz Wo Nicholas Sze added a comment -
        1. "nextInt(right - left) + left;" does not work well since the first few partitions will have bigger sizes. nextInt is uniformly random.
        2. Sure
        3. That's great.

        h7738_20150206c.patch
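
        For comparison, one common way (not necessarily what the committed AppendTestUtil does) to get partition sizes that all follow the same distribution is to draw every cut point over the whole range and then sort them. The helper below is a self-contained sketch; it takes the Random explicitly and needs java.util.Arrays and java.util.Random:

          public static int[] randomFilePartition(Random random, int n, int parts) {
            assert n > parts : "n=" + n + " should exceed parts=" + parts;
            final int[] cuts = new int[parts];
            for (int i = 0; i < parts; i++) {
              cuts[i] = 1 + random.nextInt(n - 1);  // a cut strictly inside (0, n)
            }
            Arrays.sort(cuts);  // sorted i.i.d. points give identically distributed gaps
            return cuts;        // duplicates (empty partitions) are tolerated for simplicity
          }

        With sorted independent draws, every gap between consecutive cut points has the same distribution, so no partition is systematically larger than the others.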

        hadoopqa Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12697209/h7738_20150206c.patch
        against trunk revision 8de80ff.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 8 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. There were no new javadoc warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.cli.TestHDFSCLI

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9482//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9482//console

        This message is automatically generated.

        shv Konstantin Shvachko added a comment -

        I agree your method of partitioning is more uniform. So it is just the logging.
        +1 on the patch

        szetszwo Tsz Wo Nicholas Sze added a comment -

        Thanks Yi and Konstantin for reviewing the patches.

        I have committed this.

        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-trunk-Commit #7049 (See https://builds.apache.org/job/Hadoop-trunk-Commit/7049/)
        HDFS-7738. Revise the exception message for recover lease; add more truncate tests such as truncate with HA setup, negative tests, truncate with other operations and multiple truncates. (szetszwo: rev 8f7d4bb09f760780dd193c97796ebf4d22cfd2d7)

        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAAppend.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
        • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
        • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Yarn-trunk #832 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/832/)
        HDFS-7738. Revise the exception message for recover lease; add more truncate tests such as truncate with HA setup, negative tests, truncate with other operations and multiple truncates. (szetszwo: rev 8f7d4bb09f760780dd193c97796ebf4d22cfd2d7)

        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAAppend.java
        • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
        • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #98 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/98/)
        HDFS-7738. Revise the exception message for recover lease; add more truncate tests such as truncate with HA setup, negative tests, truncate with other operations and multiple truncates. (szetszwo: rev 8f7d4bb09f760780dd193c97796ebf4d22cfd2d7)

        • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
        • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAAppend.java
        hudson Hudson added a comment -

        SUCCESS: Integrated in Hadoop-Hdfs-trunk #2030 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2030/)
        HDFS-7738. Revise the exception message for recover lease; add more truncate tests such as truncate with HA setup, negative tests, truncate with other operations and multiple truncates. (szetszwo: rev 8f7d4bb09f760780dd193c97796ebf4d22cfd2d7)

        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
        • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAAppend.java
        • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #95 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/95/)
        HDFS-7738. Revise the exception message for recover lease; add more truncate tests such as truncate with HA setup, negative tests, truncate with other operations and multiple truncates. (szetszwo: rev 8f7d4bb09f760780dd193c97796ebf4d22cfd2d7)

        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
        • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAAppend.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Mapreduce-trunk #2049 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2049/)
        HDFS-7738. Revise the exception message for recover lease; add more truncate tests such as truncate with HA setup, negative tests, truncate with other operations and multiple truncates. (szetszwo: rev 8f7d4bb09f760780dd193c97796ebf4d22cfd2d7)

        • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAAppend.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
        • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #99 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/99/)
        HDFS-7738. Revise the exception message for recover lease; add more truncate tests such as truncate with HA setup, negative tests, truncate with other operations and multiple truncates. (szetszwo: rev 8f7d4bb09f760780dd193c97796ebf4d22cfd2d7)

        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
        • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
        • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAAppend.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java

          People

          • Assignee:
            szetszwo Tsz Wo Nicholas Sze
          • Reporter:
            szetszwo Tsz Wo Nicholas Sze
          • Votes:
            0
          • Watchers:
            5
