HBASE-7404

Bucket Cache: a solution for CMS, heap fragmentation, and big cache on HBase

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.94.3
    • Fix Version/s: 0.95.0
    • Component/s: None
    • Labels:
      None
    • Hadoop Flags:
      Reviewed
    • Release Note:
      BucketCache is another implementation of BlockCache. It supports a big block cache for high performance and greatly decreases the CMS activity and heap fragmentation in the JVM caused by reads.


      Usage:

      1. Use bucket cache as the main memory cache, configured as follows:
      – "hbase.bucketcache.ioengine" = "heap"
      – "hbase.bucketcache.size" = 0.4 (bucket cache size; 0.4 is a fraction of the max heap size)

      2. Use bucket cache as a secondary cache, configured as follows:
      – "hbase.bucketcache.ioengine" = "file:/disk1/hbase/cache.data" (the file path where the block data is stored)
      – "hbase.bucketcache.size" = 1024 (bucket cache size in MB, so 1024 means 1 GB)
      – "hbase.bucketcache.combinedcache.enabled" = false (the default is true)
    • Tags:
      0.96notable

      Description

      First, thanks to @neil from Fusion-io for sharing the source code.

      Usage:

      1. Use bucket cache as the main memory cache, configured as follows:
      – "hbase.bucketcache.ioengine" = "heap" (or "offheap" to cache blocks in off-heap memory)
      – "hbase.bucketcache.size" = 0.4 (bucket cache size; 0.4 is a fraction of the max heap size)

      2. Use bucket cache as a secondary cache, configured as follows:
      – "hbase.bucketcache.ioengine" = "file:/disk1/hbase/cache.data" (the file path where the block data is stored)
      – "hbase.bucketcache.size" = 1024 (bucket cache size in MB, so 1024 means 1 GB)
      – "hbase.bucketcache.combinedcache.enabled" = false (the default is true)

      See more configuration options in org.apache.hadoop.hbase.io.hfile.CacheConfig and org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.
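      The two usage modes above would normally go into hbase-site.xml. A minimal sketch of the secondary-cache setup (the file path and sizes below are illustrative values taken from the description, not recommendations):

      ```xml
      <!-- hbase-site.xml fragment: bucket cache as a secondary, file-backed cache.
           Path and sizes are illustrative only. -->
      <property>
        <name>hbase.bucketcache.ioengine</name>
        <value>file:/disk1/hbase/cache.data</value>
      </property>
      <property>
        <name>hbase.bucketcache.size</name>
        <value>1024</value> <!-- interpreted as MB in this mode -->
      </property>
      <property>
        <name>hbase.bucketcache.combinedcache.enabled</name>
        <value>false</value> <!-- default is true -->
      </property>
      ```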

      What's Bucket Cache?
      It greatly decreases CMS activity and heap fragmentation caused by GC.
      It supports a large cache space for high read performance by using high-speed storage like Fusion-io.

      1. An implementation of block cache, like LruBlockCache
      2. Manages the blocks' storage positions itself through the Bucket Allocator
      3. Cached blocks can be stored in memory or in the file system
      4. Bucket Cache can be used as the main block cache (see CombinedBlockCache), combined with LruBlockCache, to decrease CMS activity and fragmentation caused by GC
      5. BucketCache can also be used as a secondary cache (e.g. using Fusion-io to store blocks) to enlarge the cache space

      How about SlabCache?
      We studied and tested SlabCache first, but the results were poor, because:
      1. SlabCache uses SingleSizeCache, so its memory utilization is low due to the variety of block sizes, especially when using DataBlockEncoding
      2. SlabCache is used in DoubleBlockCache: a block is cached in both SlabCache and LruBlockCache, and is put into LruBlockCache again on a SlabCache hit, so CMS activity and heap fragmentation don't get any better
      3. Off-heap performance is not as good as heap and may cause OOM, so we recommend using the "heap" engine

      See more details in the attachment and in the patch.

      1. Introduction of Bucket Cache.pdf
        966 kB
        chunhui shen
      2. hbase-7404-trunkv9.patch
        130 kB
        chunhui shen
      3. hbase-7404-trunkv2.patch
        125 kB
        chunhui shen
      4. HBASE-7404-backport-0.94.patch
        152 kB
        Dave Latham
      5. hbase-7404-94v2.patch
        122 kB
        chunhui shen
      6. BucketCache.pdf
        292 kB
        chunhui shen
      7. 7404-trunk-v14.patch
        133 kB
        chunhui shen
      8. 7404-trunk-v13.txt
        133 kB
        chunhui shen
      9. 7404-trunk-v13.patch
        133 kB
        chunhui shen
      10. 7404-trunk-v12.patch
        131 kB
        chunhui shen
      11. 7404-trunk-v11.patch
        131 kB
        Ted Yu
      12. 7404-trunk-v10.patch
        131 kB
        Ted Yu
      13. 7404-0.94-fixed-lines.txt
        131 kB
        Lars Hofhansl

        Issue Links

          Activity

          Ted Yu added a comment -

          In the slide titled 'Test Results of First Usage', TPS is write requests per second and QPS is read requests per second.

          ramkrishna.s.vasudevan added a comment -

          @Chunhui
          So you are back with a bang.
          Some terms in that test result: YGC, YGCT? Sorry if I am ignorant. Do you mean Young Generation here?

          chunhui shen added a comment -

          ramkrishna.s.vasudevan
          Hoho, I'm here all the same!!
          Yes:
          YGC = Young Generation GC Count
          YGCT = Young Generation GC Total Time

          Review board:
          https://reviews.apache.org/r/8717/

          Waiting for your comments~~~

          Andrew Purtell added a comment -

          Wow.

          So if using the "heap" engine this is an alternative or replacement for HBASE-4027 aka SlabCache? Second to last slide is results of "heap" engine tests, correct? Have you done any direct comparisons between this and the SlabCache?

          chunhui shen added a comment -

          Andrew Purtell

          Second to last slide is results of "heap" engine tests, correct?

          Yes, it uses the "heap" engine.
          We studied and tested SlabCache first, but I think the results were poor, because:
          1. SlabCache uses SingleSizeCache, so its memory utilization is low due to the variety of block sizes, especially when using DataBlockEncoding
          2. SlabCache is used in DoubleBlockCache: a block is cached in both SlabCache and LruBlockCache, and is put into LruBlockCache again on a SlabCache hit, so CMS activity and heap fragmentation don't get any better
          3. Off-heap performance is not as good as heap and may cause OOM, so we didn't test the "offheap" engine

          chunhui shen added a comment -

          Fixed a one-line bug introduced when making the patch.
          You can see that line on the review board.

          Sergey Shelukhin added a comment -

          I started reviewing... will publish several chunks of comments in review board cause I'm paranoid about losing them.

          Sergey Shelukhin added a comment -

          Sorry, got to about "freeSpace" in bucketCache, will continue later.

          Sergey Shelukhin added a comment -

          got all the way to void multiple(long start, int len, byte[] array, int arrayOffset, Accessor accessor) {...
          I will continue next week hopefully.

          chunhui shen added a comment -

          Uploading the introduction document to make bucket cache easier to understand.

          chunhui shen added a comment -

          Thanks Sergey for the review.
          I will upload a new patch per the suggestions on the review board.

          chunhui shen added a comment -

          Attaching patch v9 per the review comments.

          Ted Yu added a comment -

          For patch v9, I got the following compilation error:

          [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile (default-testCompile) on project hbase-server: Compilation failure
          [ERROR] /Users/tyu/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java:[303,30] illegal forward reference
          

          I think this is due to deserializerIdentifier being referenced in the CacheableDeserializer before the field is declared:

             private static final CacheableDeserializer<Cacheable> blockDeserializer =
          ...
          +        @Override
          +        public int getDeserialiserIdentifier() {
          +          return deserializerIdentifier;
          +        }
          ...
          +  private static final int deserializerIdentifier = CacheableDeserializerIdManager
          +      .registerDeserializer(blockDeserializer);
          
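          The "illegal forward reference" above is a Java class-initialization ordering rule: a static field initializer may not refer, by simple name, to a static field declared later in the same class body. A minimal standalone illustration of the error class and its fix (names here are hypothetical, not from the patch):

          ```java
          public class ForwardRefDemo {
              // static int copy = value;  // would NOT compile: "illegal forward
              //                           // reference", because 'value' is only
              //                           // declared further down the class body.
              static int value = 2;        // fix: declare the field first...
              static int copy = value;     // ...then reference it.

              public static void main(String[] args) {
                  System.out.println(copy);
              }
          }
          ```

          Reordering the declarations, as patch v10 does for deserializerIdentifier, is the usual fix.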
          Ted Yu added a comment -

          Patch v10 fixes the compilation error in HFileBlock.java and CacheTestUtils.java

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12562846/7404-trunk-v10.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 18 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          -1 javadoc. The javadoc tool appears to have generated 5 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 lineLengths. The patch introduces lines longer than 100

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.io.TestHeapSize

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3794//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3794//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3794//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3794//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3794//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3794//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3794//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3794//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3794//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12562846/7404-trunk-v10.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 18 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          -1 javadoc. The javadoc tool appears to have generated 5 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 lineLengths. The patch introduces lines longer than 100

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.io.TestHeapSize

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3795//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3795//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3795//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3795//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3795//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3795//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3795//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3795//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3795//console

          This message is automatically generated.

          Ted Yu added a comment -

          Patch v11 fixes TestHeapSize

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12562848/7404-trunk-v11.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 18 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          -1 javadoc. The javadoc tool appears to have generated 5 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 lineLengths. The patch introduces lines longer than 100

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.replication.TestReplication
          org.apache.hadoop.hbase.util.TestHBaseFsck
          org.apache.hadoop.hbase.client.TestHCM
          org.apache.hadoop.hbase.client.TestFromClientSide

          -1 core zombie tests. There are 1 zombie test(s): at org.apache.hadoop.hbase.io.encoding.TestUpgradeFromHFileV1ToEncoding.testUpgrade(TestUpgradeFromHFileV1ToEncoding.java:83)

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3796//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3796//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3796//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3796//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3796//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3796//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3796//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3796//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3796//console

          This message is automatically generated.

          chunhui shen added a comment -

          Thanks Ted for fixing the errors.
          Attaching patch v12 to fix the javadoc warning and the long-line warning.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12562905/7404-trunk-v12.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 18 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.regionserver.TestSplitTransaction

          -1 core zombie tests. There are 1 zombie test(s): at org.apache.hadoop.hbase.master.TestOpenedRegionHandler.testOpenedRegionHandlerOnMasterRestart(TestOpenedRegionHandler.java:104)

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3801//console

          This message is automatically generated.

          Hide
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12562848/7404-trunk-v11.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 18 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          -1 javadoc. The javadoc tool appears to have generated 5 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 lineLengths. The patch introduces lines longer than 100

          -1 core tests. The patch failed these unit tests:

          -1 core zombie tests. There are 1 zombie test(s): at org.apache.hadoop.hbase.io.encoding.TestUpgradeFromHFileV1ToEncoding.testUpgrade(TestUpgradeFromHFileV1ToEncoding.java:83)

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3800//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3800//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3800//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3800//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3800//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3800//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3800//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3800//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3800//console

          This message is automatically generated.

          chunhui shen added a comment -

          Attaching newest patch v13, addressing review comments

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12563050/7404-trunk-v13.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 18 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          -1 core tests. The patch failed these unit tests:

          -1 core zombie tests. There are 1 zombie test(s): at org.apache.hadoop.hbase.io.encoding.TestUpgradeFromHFileV1ToEncoding.testUpgrade(TestUpgradeFromHFileV1ToEncoding.java:83)

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3820//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3820//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3820//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3820//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3820//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3820//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3820//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3820//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3820//console

          This message is automatically generated.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12563244/7404-trunk-v13.txt
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 18 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3850//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3850//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3850//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3850//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3850//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3850//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3850//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3850//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3850//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12563244/7404-trunk-v13.txt
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 18 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          -1 core tests. The patch failed these unit tests:

          -1 core zombie tests. There are 2 zombie test(s): at org.apache.hadoop.hbase.io.encoding.TestUpgradeFromHFileV1ToEncoding.testUpgrade(TestUpgradeFromHFileV1ToEncoding.java:83)

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3851//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3851//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3851//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3851//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3851//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3851//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3851//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3851//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3851//console

          This message is automatically generated.

          chunhui shen added a comment -

          Addressing Ted's newest review comments in patch v14

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12563398/7404-trunk-v14.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 18 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          +1 core tests. The patch passed unit tests in .

          -1 core zombie tests. There are 1 zombie test(s):

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3874//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3874//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3874//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3874//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3874//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3874//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3874//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3874//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3874//console

          This message is automatically generated.

          Sergey Shelukhin added a comment -

          +1 on latest patch

          Ted Yu added a comment -

          +1 from me.

          Ted Yu added a comment -

          @Chunhui:
          This is an important feature. Please fill out Release Notes so that users know how to use it.

          chunhui shen added a comment -

          OK, I will attach the usage

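          For reference, the usage that was later added to the release note configures BucketCache through hbase-site.xml. A sketch of the file-backed secondary-cache setup (property names are taken from the release note; the file path and size values are illustrative):

```xml
<!-- Sketch of a BucketCache secondary-cache configuration, per the
     release note. Path and size values are illustrative only. -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <!-- "heap" for a main in-memory cache, or "file:<path>" for a
       file-backed secondary cache -->
  <value>file:/disk1/hbase/cache.data</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <!-- With a file ioengine the unit is MB (1024 = 1 GB); with the
       "heap" engine it is a fraction of max heap size, e.g. 0.4 -->
  <value>1024</value>
</property>
<property>
  <name>hbase.bucketcache.combinedcache.enabled</name>
  <!-- Defaults to true; set false to use BucketCache as a secondary
       cache alongside the LruBlockCache -->
  <value>false</value>
</property>
```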
          chunhui shen added a comment -

          Thanks for the review, Ted, Sergey

          Will commit to trunk tomorrow if no objection.

          Hudson added a comment -

          Integrated in HBase-TRUNK #3739 (See https://builds.apache.org/job/HBase-TRUNK/3739/)
          HBASE-7404 Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE (Chunhui) (Revision 1432797)

          Result = FAILURE
          zjushch :
          Files :

          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferArray.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheKey.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/Cacheable.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheableDeserializer.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheableDeserializerIdManager.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocatorException.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCacheStats.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/ByteBufferIOEngine.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/CacheFullException.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/CachedEntryQueue.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/IOEngine.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/UniqueIndexMap.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCachedBlockQueue.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketCache.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestByteBufferIOEngine.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestFileIOEngine.java
          Hudson added a comment -

          Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #346 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/346/)
          HBASE-7404 Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE (Chunhui) (Revision 1432797)

          Result = FAILURE
          zjushch :
          Files :

          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferArray.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheKey.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/Cacheable.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheableDeserializer.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheableDeserializerIdManager.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocatorException.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCacheStats.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/ByteBufferIOEngine.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/CacheFullException.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/CachedEntryQueue.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/IOEngine.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/UniqueIndexMap.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCachedBlockQueue.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketCache.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestByteBufferIOEngine.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestFileIOEngine.java
          Rishit Shroff added a comment -

          ChunHui Shen:

Thanks for this contribution. I was looking at the presentation docs and I had a couple of questions:
1. For the test configs, what were the size settings, in terms of the block/bucket sizes (default 2MB?) and the cache itself?
2. Did it use the same idea of SINGLE/MULTI touch blocks as in the LRU Block Cache when doing eviction?

          chunhui shen added a comment -

          In terms of size of the blocks/buckets(default 2MB?) and cache itself

          Using default block size (64K), buckets (2MB), cache size 0.4

          Did it use the same idea of SINGLE/MULTITOUCH blocks as we have in LRU Block Cache while doing the eviction??

          Yes

          Rishit Shroff added a comment -

Thanks for the reply. I wanted to know the actual size value of the bucket cache, if possible.

          chunhui shen added a comment -

          the actual size value

14G * 0.4 (a 14G heap with 40% given to the bucket cache, i.e. ~5.6G)

          Rishit Shroff added a comment -

Thanks! Any particular reason it was used in combination with the LRU Block Cache rather than as a replacement for it in the first use case?

          chunhui shen added a comment -

          combination with LRU Block Cache

Bloom blocks and index blocks are accessed much more frequently (especially for random reads), with a hit ratio near 100%, so keeping them in the LRU Block Cache gives better performance.
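The split described here — frequently hit index/bloom blocks kept on heap in an LRU structure, data blocks routed to the bucket cache — can be sketched roughly as below. This is an illustrative sketch with hypothetical class and method names, not the actual HBase CombinedBlockCache code:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the combined-cache idea: index/bloom blocks go to a
// small on-heap LRU map, data blocks to a (here simulated) bucket cache.
// Names are hypothetical, not the real HBase CombinedBlockCache API.
public class CombinedCacheSketch {
    enum BlockType { DATA, INDEX, BLOOM }

    // On-heap LRU for the frequently hit meta blocks (index/bloom).
    private final Map<String, byte[]> lru =
        new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> e) {
                return size() > 1024;  // arbitrary capacity for the sketch
            }
        };
    // Stand-in for the bucket cache holding serialized data blocks.
    private final Map<String, byte[]> bucket = new HashMap<>();

    public void cacheBlock(String key, BlockType type, byte[] block) {
        if (type == BlockType.DATA) {
            bucket.put(key, block);   // real code would serialize off-heap
        } else {
            lru.put(key, block);      // index/bloom stay on heap
        }
    }

    public byte[] getBlock(String key, BlockType type) {
        return type == BlockType.DATA ? bucket.get(key) : lru.get(key);
    }
}
```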

          Jean-Marc Spaggiari added a comment -

Any plan to backport that to 0.94?

          Dave Latham added a comment -

Here's a backport I did to 0.94 a few weeks back with the intent of doing some benchmarking, but I haven't had time to follow through yet.

          Attaching as "HBASE-7404-backport-0.94.patch"

          Dave Latham added a comment -

          I should make clear that it's not well tested or even very carefully reviewed, but if someone else is looking at a port it might make a helpful reference point.

          Ted Yu added a comment -

          @Dave:
          Thanks for the patch. I ran 0.94 test suite and result is green:

          Tests run: 1340, Failures: 0, Errors: 0, Skipped: 13

          [INFO] ------------------------------------------------------------------------
          [INFO] BUILD SUCCESS
          [INFO] ------------------------------------------------------------------------
          [INFO] Total time: 1:08:08.967s

          chunhui shen added a comment -

          backport-0.94.patch seems good.

Use the bucket cache as the main memory cache only in cases where the block cache hit ratio is not very high (e.g. <= 80%); otherwise LruBlockCache would be better.

Used as an L2 cache, it has been working well in our daily testing.
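For reference, the L2 (secondary cache) setup mentioned above corresponds to the file-backed configuration from the release note. A sketch of the relevant hbase-site.xml entries; the cache file path is an example:

```xml
<!-- BucketCache as a secondary (L2), file-backed cache; the path is an example -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>file:/disk1/hbase/cache.data</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <value>1024</value> <!-- unit is MB, so 1024 means 1GB -->
</property>
<property>
  <name>hbase.bucketcache.combinedcache.enabled</name>
  <value>false</value> <!-- default value is true -->
</property>
```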

          Liang Lee added a comment -

When I used this patch on my cluster, I found that the randomRead speed is very slow, about 70% of the speed before the patch.
I want to know what the problem is.
The HBase version is 0.94.2 and my cluster consists of one Master and three region servers.
The heap size is set to 4G while the machine memory is 8G.
I use the first method; the configuration is as follows:
hbase.bucketcache.ioengine heap
hbase.bucketcache.size 0.4
hbase.bucketcache.combinedcache.percentage 0.9f

          Thanks

          chunhui shen added a comment -

          Liang Lee

I think your meta block/index block hit ratio is not as high as your setting assumes.

Please lower the configuration of hbase.bucketcache.combinedcache.percentage and try again, thanks.

In addition, it is better to use LruCache for a small heap size.

          Liang Lee added a comment -

OK, thanks, I will have a try!

          Liang Lee added a comment -

Thanks, Ted.
I modified the configuration: the LRU cache size is 1G and the bucket cache size is 1G.
The heap size is set to 4G while the machine memory is 8G.
I use the HBase PerformanceEvaluation tool to test performance.
Before this patch, the randomRead run took about 6 hours, using 10 clients to read 10 million rows (each client reading 1 million rows).
After the patch, the randomRead run takes about 7 hours. I want to know what the problem is.
Thanks!

          chunhui shen added a comment -

          Liang Lee
          I have sent you a mail about the usage and tests.

          Julian Zhou added a comment -

Hi, chunhui shen, nice feature, I am going to enable and try it on our cluster. May I ask two questions?
1) Is the benchmark result in your document based on pointing the IOEngine at a Fusion-io SSD or a common HDD?
2) Is there any intelligent dynamic switching logic between the LRU block cache and the bucket cache based on hit ratio or other I/O patterns?
Thanks~

          chunhui shen added a comment -

          based on pointing IOEngine to Fusion-IO SSD or common HDD?

          Fusion-IO

          any intelligent dynamic switching logic between LRU block cache and bucket cache based on hit ratio or other I/O pattern?

It can't switch dynamically at the moment.

          Lars Hofhansl added a comment -

Liang Lee Did you ever redo your test? Did you still find it slower?

          Jean-Marc Spaggiari added a comment -

I'm currently running some perf tests for Nicolas, but as soon as I'm done I will try this patch, because I'm very interested in having it running on my cluster... I might be able to provide some feedback by the end of the week.

          chunhui shen added a comment -

          Jean-Marc Spaggiari
If you use it as the main memory block cache, its purpose is to decrease CMS pauses and heap fragmentation caused by GC rather than to improve performance.
So it is not appropriate for scenarios where the block cache hit ratio is high (e.g. >= 80%).
In other scenarios, its performance should be close to, or a little higher than, LruBlockCache's.

          Wei Li added a comment -

The default value of bucketCachePercentage is currently 0; I suggest setting it to hfile.block.cache.size if combinedWithLru is true.

          Paul Baclace added a comment -

          The attached "Introduction of Bucket Cache" describes LruBlockCache, SlabCache, and BucketCache but only has advantages/disadvantages for the first two. Surely BucketCache makes engineering tradeoffs that can be documented. For instance, serialization overhead is present and if backed by a slow disk, performance would suffer.

          Jonathan Hsieh added a comment -

          I've filed HBASE-9131 – it would be great if someone who worked on this code added documentation at an admin-level to the ref guide.

          stack added a comment -

          Marking closed.

          Lars Hofhansl added a comment -

I tried Dave's 0.94 backport. It looks good and it gets good results.
Finally HBase can usefully be run on large-memory machines and make use of all that RAM.

I would like to review the backport a bit more closely and then commit it in time for 0.94.13. Any objections? (It's mostly new code and I have seen no detriments with it disabled.)

          Lars Hofhansl added a comment -

The 0.94 patch had a bunch of CRLF line endings, which threw me off in places. This version fixes that.

          Lars Hofhansl added a comment -

          Filed HBASE-9680 for further discussion.

          Adrian Muraru added a comment -

+1 on integrating this in the 0.94 branch, thanks Lars.
One comment regarding the configs: reading through the patch, I find the usage of the "hbase.offheapcache.percentage" config misleading.

Say I want to use the bucket cache as a secondary, off-heap block cache:

if (offHeapCacheSize <= 0) {
  // "hbase.bucketcache.ioengine" = "file://ramdisk/hbase"
  // "hbase.bucketcache.combinedcache.enabled" = false
  ... init BucketCache ...
}

The hbase.offheapcache.percentage name is misleading with the addition of this block cache.
One suggestion would be to rename it to something like hbase.slab_offheapcache.

          Lars Hofhansl added a comment -

          Actually I'd rather remove the DoubleBlockCache code; apparently it has never worked right.

          So I did some simple performance tests (with the 0.94 version):

          • For (sequential) scanning I find the performance indistinguishable from the LruCache.
          • For pure random gets I find that setting hbase.bucketcache.size = 0.4 is 60% slower than just setting hfile.block.cache.size = 0.4. Is that expected? In both cases I see no cache misses.
          stack added a comment -

Lars Hofhansl Have you seen HBASE-8894? It's an improvement on this patch. @liang xie has started looking into Alex's patch over there. It might be worth waiting on his findings before you commit this to 0.94?

          stack added a comment -

          Actually I'd rather remove the DoubleBlockCache code; apparently it has never worked right.

+1. Let's not have two ways of providing a feature. DBC didn't add value; it was as slow as getting the block from the OS cache (so why use it?).

          For pure random gets I find that setting hbase.bucketcache.size = 0.4 is 60% slower than just setting hfile.block.cache.size = 0.4. Is that expected? In both cases I see no cache misses.

Maybe HBASE-8894 will get them closer in perf (only the bucket cache is off-heap).

          Good stuff

Lars Hofhansl added a comment - edited

          Thanks Stack, will have a look at HBASE-8894. Since currently it's an either/or decision between LRU and BucketCache for data blocks, a 60% slowdown does not seem to be acceptable. I'll also do some more tests.

          chunhui shen added a comment -

bq. For pure random gets I find that setting hbase.bucketcache.size = 0.4 is 60% slower than just setting hfile.block.cache.size = 0.4. Is that expected?

Maybe the usage of this feature is misunderstood.
It's not meant to replace LruBlockCache.
Its function is reducing JVM GC under a low cache hit ratio, or creating an L2 cache for HBase.

Thus, for the above test case where the cache hit ratio is near 100%, it will have worse performance than LruBlockCache, because BucketCache does a memory copy when a block is hit.
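The copy-on-hit cost mentioned here can be illustrated: serving a block cached off heap means copying its bytes from a direct ByteBuffer into a fresh on-heap array before the block can be deserialized, whereas an on-heap LRU cache hands back an already-materialized object. A minimal sketch (illustrative only, not the actual BucketCache/IOEngine code):

```java
import java.nio.ByteBuffer;

// Sketch of why an off-heap cache hit costs a copy: the block bytes live in a
// direct ByteBuffer and must be copied to the heap before deserialization.
public class OffHeapHitSketch {
    private final ByteBuffer offHeap;  // stand-in for the cache's IOEngine memory

    public OffHeapHitSketch(byte[] serializedBlock) {
        offHeap = ByteBuffer.allocateDirect(serializedBlock.length);
        offHeap.put(serializedBlock);  // "cache" the block off-heap
    }

    // On a hit, copy the bytes back on-heap; an on-heap LRU cache skips this
    // step because it stores the deserialized block object directly.
    public byte[] getBlock() {
        byte[] onHeap = new byte[offHeap.capacity()];
        ByteBuffer dup = offHeap.duplicate();  // independent position/limit
        dup.rewind();
        dup.get(onHeap);                        // the extra memory copy
        return onHeap;
    }
}
```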

          Lars Hofhansl added a comment -

          Thanks chunhui shen. I do understand the aim of the patch.
          This does replace the LruCache when you enable it, right? (for all but meta blocks)
          I.e. you cannot use the BucketCache for some data blocks and the LruCache for some other data blocks. That is the reason why this cannot be generally enabled.

If the bucket cache were in addition to the LruCache, as a "cold" cache or L2 cache, it would be a different story (for example, say you have a machine with 128 or 256GB of RAM; currently HBase cannot make use of that except as OS buffer cache. If some of this memory could be given to the bucket cache while the LruCache were still used as before, we could always enable this). That appears to be the aim of HBASE-8894.

          Lars Hofhansl added a comment -

          Also... A 60% slowdown is expected here? (I want to make sure I did not misconfigure this). Thanks.

          Jerry He added a comment -

          We'd like to have this in 0.94 too.
          On the other hand, just to confirm, if this feature is not configured/enabled, there is no impact to anything existing, correct?

          Lars Hofhansl added a comment -

          That is correct.

          Jerry He added a comment -

Any intention or activity toward putting this in 0.94?

          Lars Hofhansl added a comment -

          Was waiting for HBASE-8894, but it seems that has some ways to go.
          The problem with this patch is that when you enable the off-heap bucketcache the blockcache is disabled for all but the index blocks. There is no in-between. So from that angle it is not very useful, because most folks couldn't just switch it on, since it is quite a bit slower than the blockcache.

If it were configurable per table/CF, that would be another story. One could then have smaller, hot tables still use the blockcache and have larger, not-so-hot tables use the off-heap cache, and thus we'd be able to make use of RAM sizes of 128GB or more.

          At the same time, since this is all new code, I'm fine with reviewing Dave's backport a bit more and then committing it.
          Comments?

          Vladimir Rodionov added a comment -

          Although I am not a big fan of this implementation (BucketCache), I still think that nobody has actually tried it in real applications - only in synthetic benchmarks. Keeping INDEX and BLOOM blocks on heap and DATA blocks off heap is a very reasonable approach, taking into account that DATA blocks take ~95% of space and only 33% of accesses (get INDEX, get BLOOM, get DATA - correct?). Therefore 2/3 of ALL block cache requests must be served from the fast on-heap cache. Deserialization of a serialized block is limited only by memory bandwidth, and even with a modest 1GB per sec per CPU core we can get 15K blocks per sec per CPU core. Definitely not a bottleneck if one takes into account HBase network stack limitations as well.
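          A back-of-envelope check of the figures above. The 1 GB/s per-core bandwidth and the 64 KB DATA block size are the commenter's assumptions, not measurements:

          ```java
          // Verifies the "15K blocks/sec/core" and "2/3 on-heap" estimates
          // quoted in the comment above. All inputs are assumed figures.
          public class CacheMath {
              public static void main(String[] args) {
                  final long blockSize = 64 * 1024;   // assumed 64 KB DATA block
                  final double bytesPerSec = 1e9;     // assumed 1 GB/s per core

                  double blocksPerSec = bytesPerSec / blockSize;
                  System.out.printf("~%.0f blocks/sec/core%n", blocksPerSec); // ~15K

                  // One DATA access per INDEX + BLOOM + DATA lookup sequence,
                  // so 2 out of 3 requests hit the on-heap cache.
                  double onHeapShare = 2.0 / 3.0;
                  System.out.printf("on-heap share of requests: %.0f%%%n",
                          onHeapShare * 100);
              }
          }
          ```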

          Liang Xie added a comment -

          We (Xiaomi) have ported it into our 0.94 branch and run it in several latency-sensitive clusters for several months already.

          Lars Hofhansl added a comment - edited

          I'm not opposed to backporting this if there is interest. I'm wrestling with large-memory machines myself currently. This is in 0.96 and later already; and this is almost completely new code with very little risk to existing functionality.

          Liang Xie, is your patch the same as the one posted here by Dave Latham? If not, mind adding a refreshed patch?
          If you have some performance numbers, that would be great too.

          I'd also be curious how this fares performance-wise against just using the OS buffer cache. I.e., how does serialization from the OS cache compare to serialization from the bucket cache?
          And - to preempt any comments to that effect - I realize that the bucket cache provides more flexibility than the OS buffer cache, which indiscriminately caches blocks (unless we use fadvise hints, etc.).

          Lars Hofhansl added a comment -

          Vladimir Rodionov You are right for point requests (GETs). For scans, however, the performance penalty is substantial (less than 1/2 the scan performance when I measured last, but I can check again).

          stack added a comment -

          Vladimir Rodionov "..Although, I am not big fan of this implementation (BucketCache),..."

          What issues do you have w/ it V? Out of interest? I'm starting to dig in here. Thanks boss.

          Vladimir Rodionov added a comment -

          What issues do you have w/ it V? Out of interest? I'm starting to dig in here. Thanks boss.

          It keeps block keys on heap. It evicts blocks in batches. It does not support compression. The disk-based (SSD) cache is not SSD-friendly. And it cannot use both RAM and disk at the same time.

          Vladimir Rodionov added a comment -

          For scans, however, the performance penalty is substantial (less than 1/2 the scan performance when I measured last, but I can check again).

          I think for scans w/o filters (and skips) the performance should be comparable. The more skips we have in a scanner, the less performance we will get from an off-heap-based cache, due to the obvious deserialization overhead... but this holds only if ALL your data fits in the block cache, which is usually not the case for applications in production. To deserialize a 64K block takes roughly 30 microseconds (2GB per sec). To fetch the same block from a local HDD takes 10ms; from SSD, 0.5-1ms. Therefore, when your data does not fit comfortably into the LruBlockCache (10-30GB), I think off-heap has a huge advantage.
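          A rough cost comparison built from the latency figures quoted above; all numbers are the commenter's estimates, not benchmark results:

          ```java
          // Compares the estimated cost of deserializing a cached block against
          // re-reading it from disk, using the figures from the comment above.
          public class FetchCost {
              public static void main(String[] args) {
                  // 64 KB block at an assumed 2 GB/s deserialization bandwidth
                  double deserializeUs = (64 * 1024) / 2e9 * 1e6; // ~33 us
                  double hddFetchUs = 10_000;  // quoted ~10 ms local HDD read
                  double ssdFetchUs = 750;     // quoted 0.5-1 ms SSD read

                  System.out.printf("deserialize 64K block: %.1f us%n", deserializeUs);
                  System.out.printf("HDD fetch: ~%.0fx slower%n",
                          hddFetchUs / deserializeUs);
                  System.out.printf("SSD fetch: ~%.0fx slower%n",
                          ssdFetchUs / deserializeUs);
              }
          }
          ```

          Even against SSD, the estimated deserialization cost is over an order of magnitude cheaper than a re-read, which supports the point that off-heap caching wins once the working set outgrows the LruBlockCache.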

          stack added a comment -

          Thanks Vladimir Rodionov

          It keeps block keys on heap.

          You'd have the indices offheap too?

          It evicts blocks in batches.

          What you thinking instead boss?

          It does not support compression.

          <ugly buzzer sound>Unnggghhhh</ugly buzzer sound> Let me looksee if can fix...

          Disk-based (SSD) cache is not SSD-friendly

          The access pattern wears?

          It does not do both: RAM and Disk at the same time.

          This is the Lars Hofhansl suggestion that we have LRUBlockCache OR BucketCache based off a table configuration? Or you thinking L1/L2 layout?

          Thanks boss.

          Vladimir Rodionov added a comment -

          Yes, keys need to be off-heaped as well to allow scaling well beyond 100G. Real-time eviction is better (but harder) for latency-sensitive applications. SSD-friendliness means sequential writes, with much lower latency variation than random writes and their worse access consistency. And yes, an L1/L2 layout is what I am thinking about, stack.

          Liang Xie added a comment -

          is your patch the same as the one posted here by Dave Latham? If not, mind adding refreshed patch?

          Our port was against our internal 0.94.3 branch, but I guess there should be no difference from Dave's, since most of HBASE-7404's changes are new files.

          If you had some performance numbers that would be great too.

          We decided to port it since the biggest latency contributor in several of our clusters was GC. After porting this JIRA, and with lots of VM tuning, the total GC cost each day decreased from [2000,3000]s to [300,500]s, and the top contributor to 99th-percentile latency isn't GC any more. I think the ported code contributed at least a [200,400]ms reduction, but I've forgotten the exact numbers; it was several months ago, you know.

          I agree that if we don't have GC trouble, then there's no benefit here, unless we want to run on fast flash, which is not my scenario.

          Liyin Tang added a comment -

          Liang, just curious, what's the top contributor to the p99 latency in your case?

          Liang Xie added a comment -

          Liyin Tang, so far as I know, most of them are related to the HDFS layer. The existing unfinished JIRAs (1. hedged reads; 2. two WAL writers) should be expected to alleviate this somewhat, but there is still other work to do, e.g. HDFS IO QoS or request priority (HDFS-5727). I am a late entry to the HBase+HDFS percentile latency tuning game, but I have an intuitive sense that there is still lots of room for improvement.


            People

            • Assignee:
              chunhui shen
            • Reporter:
              chunhui shen
            • Votes: 7
            • Watchers: 52