HBase / HBASE-9910

TestHFilePerformance and HFilePerformanceEvaluation should be merged in a single HFile performance test class.

    Details

    • Hadoop Flags:
      Reviewed
    • Release Note:
      Add support for codec and cipher in HFilePerformanceEvaluation

      Description

      Today TestHFilePerformance and HFilePerformanceEvaluation are doing slightly different kind of performance tests both for the HFile. We should consider merging those 2 tests in a single class.

      1. HBASE-9910.patch
        28 kB
        Andrew Purtell
      2. HBASE-9910.patch
        10 kB
        Vikas Vishwakarma
      3. HBASE-9910-0.98.patch
        27 kB
        Andrew Purtell
      4. HBASE-9910-branch-1.patch
        27 kB
        Andrew Purtell

        Activity

        enis Enis Soztutar added a comment -

        Closing this issue after 1.0.0 release.

        hudson Hudson added a comment -

        FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #802 (See https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/802/)
        HBASE-9910 TestHFilePerformance and HFilePerformanceEvaluation should be merged in a single HFile performance test class (Vikas Vishwakarma) (apurtell: rev 0e7b4655da9b09353c5513de4c0199d66429f403)

        • hbase-server/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
        • hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java
        hudson Hudson added a comment -

        FAILURE: Integrated in HBase-0.98 #844 (See https://builds.apache.org/job/HBase-0.98/844/)
        HBASE-9910 TestHFilePerformance and HFilePerformanceEvaluation should be merged in a single HFile performance test class (Vikas Vishwakarma) (apurtell: rev 0e7b4655da9b09353c5513de4c0199d66429f403)

        • hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java
        • hbase-server/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
        hudson Hudson added a comment -

        FAILURE: Integrated in HBase-1.1 #165 (See https://builds.apache.org/job/HBase-1.1/165/)
        HBASE-9910 TestHFilePerformance and HFilePerformanceEvaluation should be merged in a single HFile performance test class (Vikas Vishwakarma) (apurtell: rev 8dd17e1ff87aa8170a83f75050046df4022b0866)

        • hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java
        • hbase-server/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
        hudson Hudson added a comment -

        FAILURE: Integrated in HBase-TRUNK #6114 (See https://builds.apache.org/job/HBase-TRUNK/6114/)
        HBASE-9910 TestHFilePerformance and HFilePerformanceEvaluation should be merged in a single HFile performance test class (Vikas Vishwakarma) (apurtell: rev f9cf565f1ddcd9120fe26e5e92760662825f13f9)

        • hbase-server/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
        • hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java
        hudson Hudson added a comment -

        SUCCESS: Integrated in HBase-1.0 #726 (See https://builds.apache.org/job/HBase-1.0/726/)
        HBASE-9910 TestHFilePerformance and HFilePerformanceEvaluation should be merged in a single HFile performance test class (Vikas Vishwakarma) (apurtell: rev d971edee1f74f8892f5d9e6226e8d1ba68ab7a7e)

        • hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java
        • hbase-server/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
        vik.karma Vikas Vishwakarma added a comment -

        yay ! Thanks Andrew Purtell

        apurtell Andrew Purtell added a comment -

        Attaching what I committed, which also removes the broken TestHFilePerformance

        apurtell Andrew Purtell added a comment -

        Applied to master, branch-1 and 0.98 (with fixup), tested by hand, looks good. Committing shortly.

        apurtell Andrew Purtell added a comment -

        Those errors are probably not related to this patch, Vikas Vishwakarma.

        vik.karma Vikas Vishwakarma added a comment -

        Does not look related to the above commit; mostly test timeouts and a table creation failure in TestLoadIncrementalHFiles. The amount of HBase read/write ops does increase a lot with the above commit, and I am not sure if that can cause some slowness. I can reduce ROW_COUNT to cut down the HBase read/write ops in the above tests, or comment out the AES/gz tests so they can be run on a need basis, same as other codecs like snappy/lzo.

        testSimpleLoad(org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles) Time elapsed: 60.08 sec <<< ERROR!
        java.lang.Exception: test timed out after 60000 milliseconds
        ..
        Tests run: 13, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 316.933 sec - in org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat
        Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 107.939 sec <<< FAILURE! - in org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles
        testSimpleLoad(org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles) Time elapsed: 0.131 sec <<< ERROR!
        org.apache.hadoop.hbase.TableNotFoundException: Table 'mytable_testSimpleLoad' does not exist.
        ....
        Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 111.265 sec <<< FAILURE! - in org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles
        testSimpleLoad(org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles) Time elapsed: 0.121 sec <<< ERROR!
        org.apache.hadoop.hbase.TableNotFoundException: Table 'mytable_testSimpleLoad' does not exist.
        ...

        hadoopqa Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12697316/HBASE-9910.patch
        against master branch at commit eea9873ceff60381d50799994e260e8319ee68a7.
        ATTACHMENT ID: 12697316

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 3 new or modified tests.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 checkstyle. The applied patch does not increase the total number of checkstyle errors

        +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 lineLengths. The patch does not introduce lines longer than 100

        +1 site. The mvn site goal succeeds with this patch.

        -1 core tests. The patch failed these unit tests:
        org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles

        -1 core zombie tests. There are 1 zombie test(s):

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/12736//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12736//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12736//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12736//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12736//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12736//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12736//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12736//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12736//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12736//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12736//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12736//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
        Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/12736//artifact/patchprocess/checkstyle-aggregate.html

        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/12736//console

        This message is automatically generated.

        vik.karma Vikas Vishwakarma added a comment -

        Submitted a patch for 2.0.0, also suppressing the CodecPool INFO logs

        vik.karma Vikas Vishwakarma added a comment -

        Submitting the patch for 2.0.0, also suppressing the INFO logs for CodecPool

        vik.karma Vikas Vishwakarma added a comment -

        Submitting the patch for 2.0.0, also suppressing the INFO logs for CodecPool

        vik.karma Vikas Vishwakarma added a comment -

        Hi Nick,

        I will set the compress.CodecPool logger to WARN. From what I read, if the native codec is installed properly and available, it will be used directly; the Java codec only comes into the picture if it is not available. So this will be handled automatically without any code change, right?

        vik.karma Vikas Vishwakarma added a comment -

        Going to submit a new 2.0.0 patch which will also suppress the verbose compress.CodecPool INFO logging.

        ndimiduk Nick Dimiduk added a comment -

        Another issue I am running into while adding codec tests is this one related HBASE-5881 because of which codecpool is not being used and the output gets flooded with info messages like this.

        I've run into this as well while doing local testing. Better to use a native GZ impl for real performance testing anyway, right? The logger can be configured to WARN for the relevant classes (IIRC, there's a Hadoop class as well that needs to be silenced).

        apurtell Andrew Purtell added a comment -

        Another issue I am running into while adding codec tests is this one related HBASE-5881 because of which codecpool is not being used and the output gets flooded with info messages like this.

        Flooding the logger will perturb perf results. Consider programmatically changing the logger configuration for this tool to suppress INFO level logging from CodecPool.
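
The idea can be sketched as follows. This is a minimal, hypothetical illustration using java.util.logging (HBase of this era configured log4j, where the equivalent call is Logger.getLogger(name).setLevel(Level.WARN)); the logger name is assumed to mirror Hadoop's CodecPool class and is not taken from the actual patch.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class CodecPoolLogSilencer {
    // Assumed logger name, mirroring Hadoop's CodecPool class.
    static final String CODEC_POOL_LOGGER = "org.apache.hadoop.io.compress.CodecPool";

    // Raise the logger threshold so "Got brand-new decompressor" INFO lines are dropped
    // before the benchmark loop starts, keeping log I/O out of the measured path.
    public static void silence() {
        Logger.getLogger(CODEC_POOL_LOGGER).setLevel(Level.WARNING);
    }

    public static void main(String[] args) {
        silence();
        System.out.println("INFO loggable: "
                + Logger.getLogger(CODEC_POOL_LOGGER).isLoggable(Level.INFO));
    }
}
```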

        vik.karma Vikas Vishwakarma added a comment -

        Another issue I am running into while adding codec tests is the one related to HBASE-5881, because of which CodecPool is not being used and the output gets flooded with INFO messages like this. I have added a test summary which will be printed at the end to counter this, but will that be acceptable?

        2015-01-26 13:15:18,643 INFO [0] compress.CodecPool: Got brand-new decompressor [.gz]
        2015-01-26 13:15:18,643 INFO [0] compress.CodecPool: Got brand-new decompressor [.gz]
        2015-01-26 13:15:18,643 INFO [0] compress.CodecPool: Got brand-new decompressor [.gz]
        2015-01-26 13:15:18,643 INFO [0] compress.CodecPool: Got brand-new decompressor [.gz]
        2015-01-26 13:15:18,643 INFO [0] compress.CodecPool: Got brand-new decompressor [.gz]
        ....

        vik.karma Vikas Vishwakarma added a comment -

        Fixed the StackOverflowError in HBASE-12917 and submitted a patch. Please review it; after that I will merge these changes into 2.0.0. HFilePerformanceEvaluation (2.0.0) looks to be running very slowly compared to HFilePerformanceEvaluation (0.98.10).

        vik.karma Vikas Vishwakarma added a comment -

        Ran into HBASE-12917 while trying HFilePerformanceEvaluation with the main branch. The scan tests are failing with StackOverflowError.

        vik.karma Vikas Vishwakarma added a comment -

        OK, thanks Andrew, I will add that.

        regards,


        apurtell Andrew Purtell added a comment -

        You'll need to make a patch against current HBase 'master' branch in order for Jenkins to be able to apply it for the precommit checks.

        hadoopqa Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12694408/HBASE-9910.patch
        against master branch at commit a4be1b84e497f25fcae6209d3d82837b47785e05.
        ATTACHMENT ID: 12694408

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 3 new or modified tests.

        -1 patch. The patch command could not apply the patch.

        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/12580//console

        This message is automatically generated.

        vik.karma Vikas Vishwakarma added a comment -

        add support for codec and cipher in read and write tests

        vik.karma Vikas Vishwakarma added a comment -

        Failed with "The patch does not appear to apply with p0 to p2
        PATCH APPLICATION FAILED"

        vik.karma Vikas Vishwakarma added a comment -

        add support to specify codec and cipher for read/write tests

        hadoopqa Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12694354/HBASE-9910.patch
        against master branch at commit 588b43b06ba9a3434dc2178b5b014283cc959d62.
        ATTACHMENT ID: 12694354

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 3 new or modified tests.

        -1 patch. The patch command could not apply the patch.

        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/12577//console

        This message is automatically generated.

        vik.karma Vikas Vishwakarma added a comment -

        adding codec and cipher tests for HFile read/write tests

        vik.karma Vikas Vishwakarma added a comment -

        Hi JM,

        I gave it a try and am submitting the initial cut for your feedback. Please review the submitted patch file and let me know your thoughts on it.
        I have added support to specify codec and cipher for the HFile writer/reader tests, and also included test combinations for the gz codec and aes cipher, similar to what is being done in TestHFilePerformance. For each combination the writer will run once, followed by all the readers.
        While looking at TestHFilePerformance, I realized that the AES cipher tests were completely broken (HBASE-12866), so I have fixed that while including it in HFilePerformanceEvaluation.
        Also, I have added a test summary, and I am deleting all the test files at the end of the test.
        I have tested it locally using bin/start-hbase.sh; bin/hbase org.apache.hadoop.hbase.HFilePerformanceEvaluation

        Sample Test Result:
        ========================
        ***************
        Result Summary
        ***************

        Running SequentialWriteBenchmark with codec[none] cipher[none] for 1000000 rows took 761ms.
        Running UniformRandomSmallScan with codec[none] cipher[none] for 1000000 rows took 2271ms.
        Running UniformRandomReadBenchmark with codec[none] cipher[none] for 1000000 rows took 71717ms.
        Running GaussianRandomReadBenchmark with codec[none] cipher[none] for 1000000 rows took 79260ms.
        Running SequentialReadBenchmark with codec[none] cipher[none] for 1000000 rows took 227ms.
        Running SequentialWriteBenchmark with codec[gz] cipher[none] for 1000000 rows took 1025ms.
        Running UniformRandomSmallScan with codec[gz] cipher[none] for 1000000 rows took 15829ms.
        Running UniformRandomReadBenchmark with codec[gz] cipher[none] for 1000000 rows took 145314ms.
        Running GaussianRandomReadBenchmark with codec[gz] cipher[none] for 1000000 rows took 155687ms.
        Running SequentialReadBenchmark with codec[gz] cipher[none] for 1000000 rows took 434ms.
        Running SequentialWriteBenchmark with codec[none] cipher[aes] for 1000000 rows took 953ms.
        Running UniformRandomSmallScan with codec[none] cipher[aes] for 1000000 rows took 7113ms.
        Running UniformRandomReadBenchmark with codec[none] cipher[aes] for 1000000 rows took 121273ms.
        Running GaussianRandomReadBenchmark with codec[none] cipher[aes] for 1000000 rows took 134818ms.
        Running SequentialReadBenchmark with codec[none] cipher[aes] for 1000000 rows took 378ms.
        Running SequentialWriteBenchmark with codec[gz] cipher[aes] for 1000000 rows took 1187ms.
        Running UniformRandomSmallScan with codec[gz] cipher[aes] for 1000000 rows took 15546ms.
        Running UniformRandomReadBenchmark with codec[gz] cipher[aes] for 1000000 rows took 158620ms.
        Running GaussianRandomReadBenchmark with codec[gz] cipher[aes] for 1000000 rows took 176853ms.
        Running SequentialReadBenchmark with codec[gz] cipher[aes] for 1000000 rows took 506ms.
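
The run order in the summary above (one write pass per codec/cipher combination, then every read benchmark against that file) can be sketched as a pair of nested loops. This is an illustrative sketch only; the benchmark names are taken from the summary, and the method and class names here are hypothetical, not the actual patch code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CodecCipherMatrix {

    // Returns the benchmark run order for every codec x cipher combination:
    // the sequential write first, then each read benchmark against that file.
    static List<String> runOrder() {
        List<String> codecs = Arrays.asList("none", "gz");
        List<String> ciphers = Arrays.asList("none", "aes");
        List<String> readers = Arrays.asList(
                "UniformRandomSmallScan",
                "UniformRandomReadBenchmark",
                "GaussianRandomReadBenchmark",
                "SequentialReadBenchmark");
        List<String> order = new ArrayList<>();
        for (String codec : codecs) {
            for (String cipher : ciphers) {
                // One write pass per combination...
                order.add("SequentialWriteBenchmark codec[" + codec + "] cipher[" + cipher + "]");
                // ...followed by all the readers against the file it produced.
                for (String reader : readers) {
                    order.add(reader + " codec[" + codec + "] cipher[" + cipher + "]");
                }
            }
        }
        return order;
    }

    public static void main(String[] args) {
        runOrder().forEach(System.out::println);
    }
}
```

With two codecs, two ciphers, one writer, and four readers this yields 2 x 2 x 5 = 20 runs, matching the twenty lines of the result summary.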

        vik.karma Vikas Vishwakarma added a comment -

        adding codec and cipher tests for HFile read/write tests


          People

          • Assignee:
            vik.karma Vikas Vishwakarma
            Reporter:
            jmspaggi Jean-Marc Spaggiari
          • Votes:
            0
            Watchers:
            7

            Dates

            • Created:
              Updated:
              Resolved:
