  HBase > HBASE-10403 Simplify offheap cache configuration > HBASE-11520

Simplify offheap cache config by removing the confusing "hbase.bucketcache.percentage.in.combinedcache"

    Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.99.0
    • Fix Version/s: 0.99.0, 2.0.0
    • Component/s: io
    • Labels:
      None
    • Hadoop Flags:
      Reviewed
    • Release Note:
      Remove "hbase.bucketcache.percentage.in.combinedcache". This simplifies block cache configuration. If you are using this config, it will be ignored after this patch goes in. The L1 LruBlockCache will be whatever hfile.block.cache.size is set to, and the L2 BucketCache will be whatever hbase.bucketcache.size is set to.

      Description

      Remove "hbase.bucketcache.percentage.in.combinedcache". It is an unnecessary complication of block cache config. Let the L1 config setup be as it is whether an L2 is present or not: just set hfile.block.cache.size (not hbase.bucketcache.size * (1.0 - hbase.bucketcache.percentage.in.combinedcache)). For L2, let hbase.bucketcache.size be the actual size of the bucket cache, not hbase.bucketcache.size * hbase.bucketcache.percentage.in.combinedcache.
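
      For illustration, the simplified setup described above might look like the following hbase-site.xml fragment (the sizes here are illustrative, not recommendations):

      ```xml
      <!-- L1 and L2 are now sized independently of one another. -->
      <property>
        <name>hfile.block.cache.size</name>
        <!-- L1 LruBlockCache, as a fraction of the heap -->
        <value>0.4</value>
      </property>
      <property>
        <name>hbase.bucketcache.ioengine</name>
        <value>offheap</value>
      </property>
      <property>
        <name>hbase.bucketcache.size</name>
        <!-- L2 BucketCache: MB when >= 1, fraction of max heap when < 1 -->
        <value>4096</value>
      </property>
      ```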

      The attached patch removes the config and updates the docs. It adds tests to confirm configs are as expected whether in a CombinedBlockCache deploy or a strict L1+L2 deploy.

      1. 11520.txt
        17 kB
        stack
      2. 11520v2.txt
        18 kB
        stack
      3. 11520v3.txt
        24 kB
        stack
      4. 11520v3.txt
        24 kB
        stack

        Activity

        hudson Hudson added a comment -

        SUCCESS: Integrated in Ambari-trunk-Commit #2804 (See https://builds.apache.org/job/Ambari-trunk-Commit/2804/)
        AMBARI-11552. 2.3 stack advisor doesn't take into account HBASE-11520 (Nick Dimiduk via srimanth) (sgunturi: http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=aeccbc7fe458509241e16c47f653f65a6ed8c2e4)

        • ambari-server/src/test/python/stacks/2.3/common/test_stack_advisor.py
        • ambari-server/src/main/resources/stacks/HDP/2.2/services/stack_advisor.py
        • ambari-server/src/main/resources/stacks/HDP/2.3/services/stack_advisor.py
        • ambari-server/src/test/python/stacks/2.2/common/test_stack_advisor.py
        enis Enis Soztutar added a comment -

        Closing this issue after 0.99.0 release.

        ndimiduk Nick Dimiduk added a comment -

        You ok w/ it?

        Yeah, I'm okay with this. You leave things better than when you found them.

        stack stack added a comment -

        lruCache.setVictimCache is called only when bucketCacheIOEngineName != null && bucketCacheSize > 0.

        ... and combinedWithLru is not set, yeah.

        combinedWithLru is used to size the lrucache based on the value of bucketCacheSize, calculated from hbase.bucketcache.size.

        No. Not in the last version of this patch. LruBC size is detached from bucketCacheSizing.

        So setting hbase.bucketcache.combinedcache.enabled=false, hbase.bucketcache.ioengine not null .... with a victimCache set but no way to access the blocks from the bucketcache.

        No (as per your later comment).

        Setting bucketCacheIOEngineName != null && bucketCacheSize > 0 but hbase.bucketcache.combinedcache.enabled=false results in GLOBAL_BLOCK_CACHE_INSTANCE=LruBlockCache and a surprising amount of wasted memory.

        The bucket cache will be set into LruBC as victimHandler.

        So really, setting hbase.bucketcache.combinedcache.enabled=false means "cache data blocks in Lru first" while =true means "data blocks never go to lru, only bucket".

        Yeah. Seems like a simple story to tell. You ok w/ it? Let me write up the L1+L2 deploy type in case folks are interested, and put it in the manual. Let me see if I can test it too. It would probably be better with a policy in place that loaded data blocks to L2 first and only promoted them to L1 if accessed a few times, rather than as it is now, loading to L1 always and then to L2.

        The unit tests added to TestCacheConfig try to confirm that these deploy options are laid out as we expect.

        Thanks for looking.

        ndimiduk Nick Dimiduk added a comment -

        Given this latter, the package-info as it is in the patch is right by my reckoning. From LruBlockCache

        Oh. I thought the only place BucketCache was entering the picture was via CombinedBlockCache:

          public Cacheable getBlock(BlockCacheKey cacheKey, boolean caching,
              boolean repeat, boolean updateCacheMetrics) {
            // TODO: is there a hole here, or just awkwardness since in the lruCache getBlock
            // we end up calling bucketCache.getBlock.
            if (lruCache.containsBlock(cacheKey)) {
              return lruCache.getBlock(cacheKey, caching, repeat, updateCacheMetrics);
            }
            return bucketCache.getBlock(cacheKey, caching, repeat, updateCacheMetrics);
          }
        

        I didn't grok your meaning of LruBlockCache reaching into its victimHandler at read time. So really, setting hbase.bucketcache.combinedcache.enabled=false means "cache data blocks in Lru first" while =true means "data blocks never go to lru, only bucket".
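
        The read-time fallthrough being discussed can be sketched with a toy model (hypothetical types and names, not HBase's actual classes): on an L1 miss, the LRU cache consults its victim cache, which is how a non-combined L1+L2 deploy can still serve blocks out of the BucketCache.

        ```java
        import java.util.HashMap;
        import java.util.Map;

        interface BlockCache {
            Object getBlock(String cacheKey);
            void cacheBlock(String cacheKey, Object block);
        }

        // Simplified stand-in for LruBlockCache with a victimHandler.
        class VictimAwareLruCache implements BlockCache {
            private final Map<String, Object> map = new HashMap<>();
            private BlockCache victimHandler; // e.g. a BucketCache in an L1+L2 deploy

            void setVictimCache(BlockCache victim) { this.victimHandler = victim; }

            @Override
            public void cacheBlock(String key, Object block) { map.put(key, block); }

            @Override
            public Object getBlock(String key) {
                Object block = map.get(key);
                if (block == null && victimHandler != null) {
                    // On an L1 miss, reach into the victim cache at read time.
                    return victimHandler.getBlock(key);
                }
                return block;
            }
        }
        ```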

        ndimiduk Nick Dimiduk added a comment -

        On the test, offheap plus BUCKET_CACHE_COMBINED_KEY == false is a legit setup. This is how you ask for an L1+L2 that is not 'combined'.

        Reading through the logic again.

        lruCache.setVictimCache is called only when bucketCacheIOEngineName != null && bucketCacheSize > 0. combinedWithLru is used to size the lrucache based on the value of bucketCacheSize, calculated from hbase.bucketcache.size. So setting hbase.bucketcache.combinedcache.enabled=false, hbase.bucketcache.ioengine not null, and hbase.bucketcache.size not 0 results in GLOBAL_BLOCK_CACHE_INSTANCE=LruBlockCache with a victimCache set but no way to access the blocks from the bucketcache.

        Another observation: the only way to have a BucketCache instance accessible via GLOBAL_BLOCK_CACHE_INSTANCE is via a CombinedCache. Setting bucketCacheIOEngineName != null && bucketCacheSize > 0 but hbase.bucketcache.combinedcache.enabled=false results in GLOBAL_BLOCK_CACHE_INSTANCE=LruBlockCache and a surprising amount of wasted memory.

        I think the only valid use of the bucketcache is when combinedcache.enabled=true.

        Am I missing something?

        apurtell Andrew Purtell added a comment -

        We can catch that OOME out of HMaster.constructMaster and print a message providing guidance and reference somewhere in the online manual.

        stack stack added a comment -

        We should probably drop config'ing direct max memory in hbase-env.sh since in jdk7 there is no limit (to be confirmed).

        stack stack added a comment -

        Here is what happens if you set a bucketcache size > allocated direct memory:

        2014-07-16 14:03:02,081 INFO  [main] zookeeper.ZooKeeper: Initiating client connection, connectString=c2020.halxg.cloudera.com:2181 sessionTimeout=90000 watcher=master:16020, quorum=c2020.halxg.cloudera.com:2181, baseZNode=/hbase
        2014-07-16 14:03:02,309 ERROR [main] master.HMasterCommandLine: Master exiting
        java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
                at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1796)
                at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:194)
                at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
                at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
                at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
                at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1810)
        Caused by: java.lang.OutOfMemoryError: Direct buffer memory
                at java.nio.Bits.reserveMemory(Bits.java:658)
                at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
                at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
                at org.apache.zookeeper.ClientCnxnSocket.<init>(ClientCnxnSocket.java:51)
                at org.apache.zookeeper.ClientCnxnSocketNIO.<init>(ClientCnxnSocketNIO.java:48)
                at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
                at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
                at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
                at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
                at java.lang.Class.newInstance(Class.java:374)
                at org.apache.zookeeper.ZooKeeper.getClientCnxnSocket(ZooKeeper.java:1779)
                at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:447)
                at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
                at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.<init>(RecoverableZooKeeper.java:112)
                at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:132)
                at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:164)
                at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:502)
                at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:267)
                at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
                at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
                at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
                at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
                at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1791)
                ... 5 more
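
        The remedy for this OOME is a large enough direct-memory ceiling for the process; a minimal hbase-env.sh sketch (HBASE_OPTS is HBase's standard env hook; the 5g value is purely illustrative and should exceed the configured bucket cache plus headroom for NIO buffers such as the ZooKeeper client's):

        ```shell
        # hbase-env.sh: raise the JVM direct-memory limit above the
        # offheap BucketCache size (5g here is an illustrative value).
        export HBASE_OPTS="$HBASE_OPTS -XX:MaxDirectMemorySize=5g"
        ```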
        
        hudson Hudson added a comment -

        FAILURE: Integrated in HBase-1.0 #47 (See https://builds.apache.org/job/HBase-1.0/47/)
        HBASE-11520 Simplify offheap cache config by removing the confusing "hbase.bucketcache.percentage.in.combinedcache" (stack: rev 14b331ccab8ef18b00ff7b4cf9127e22c74c4bca)

        • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
        • src/main/docbkx/book.xml
        • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/package-info.java
        • hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheConfig.java
        • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
        hudson Hudson added a comment -

        FAILURE: Integrated in HBase-TRUNK #5311 (See https://builds.apache.org/job/HBase-TRUNK/5311/)
        HBASE-11520 Simplify offheap cache config by removing the confusing "hbase.bucketcache.percentage.in.combinedcache" (stack: rev 8a481b87b57035aca9f6ff2833104eb073e2e889)

        • src/main/docbkx/book.xml
        • hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheConfig.java
        • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/package-info.java
        • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
        • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
          Add a upgrading to 1.0 section to the upgrade part of the book; talk about HBASE-11520 removing hbase.bucketcache.percentage.in.combinedcache (stack: rev a99b71da5774500af5d72b47dcfd7a7cf2a9eb00)
        • src/main/docbkx/upgrading.xml
        anoop.hbase Anoop Sam John added a comment -

        Resizeable CBC would be great though I'd say resizing an offheap BC is probably low priority;

        My plan is for the onheap part only. The L1 can be resized anyway. (In the case of the combined cache, it won't resize as of today.)
        One more point: in CombinedCache, when both L1 and L2 are onheap, the resize will be applied to the L1 only.

        One more thing came to mind just now.
        We do have checks on cluster start that the sum of the memstore size and block cache size can be at most 80%. When BucketCache with onheap is used, it can take more than 80% of memory! We must change this checking logic also.
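
        The startup check Anoop describes can be sketched like so (hypothetical names, not HBase's actual validation code): an onheap BucketCache fraction has to count against the 80% budget, while an offheap one does not.

        ```java
        // Illustrative sketch of the heap-budget sanity check discussed above.
        class HeapBudgetCheck {
            static final float MAX_COMBINED_FRACTION = 0.8f;

            // onheapBucketFraction is > 0 only when hbase.bucketcache.ioengine=heap;
            // an offheap BucketCache does not consume heap, so it passes 0 here.
            static boolean isValid(float memstoreFraction, float l1CacheFraction,
                                   float onheapBucketFraction) {
                return memstoreFraction + l1CacheFraction + onheapBucketFraction
                    <= MAX_COMBINED_FRACTION;
            }
        }
        ```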

        stack stack added a comment -

        Applied to master and branch-1. Thanks for the reviews, lads.

        stack stack added a comment -

        When bucketCacheIOEngineName is "heap" it is correct to calculate the memory size by mu.getMax() * bucketCachePercentage But when it is offheap, size calculation based on max heap memory looks strange no?

        Yeah. It is fallout from the way in which BUCKET_CACHE_SIZE_KEY can be either MB or a float between 0 and 1. I am reluctant to change this for 1.0; someone may be depending on this 'behavior'. I intend to add more on BC to the refguide describing the options, and will include doc on this little vagary.
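
        The MB-or-fraction behavior stack describes can be sketched as follows; this mirrors the logic of the patch snippet quoted in the review, with `bucketCacheBytes` and `maxHeapBytes` as illustrative names standing in for the CacheConfig code and `mu.getMax()`:

        ```java
        // hbase.bucketcache.size is read as a float: values < 1 are treated as a
        // fraction of max heap, values >= 1 as an absolute size in megabytes.
        class BucketCacheSizing {
            static long bucketCacheBytes(float configured, long maxHeapBytes) {
                return (long) (configured < 1
                    ? maxHeapBytes * configured   // fraction of the heap
                    : configured * 1024 * 1024);  // absolute size in MB
            }
        }
        ```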

        Resizeable CBC would be great though I'd say resizing an offheap BC is probably low priority; the important resizing is in the heap and you have the LruBC doing that already.

        Thanks for the +1. Let me commit. The TestReplicaWithCluster is unrelated.

        hadoopqa Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12655980/11520v3.txt
        against trunk revision .
        ATTACHMENT ID: 12655980

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 3 new or modified tests.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 lineLengths. The patch does not introduce lines longer than 100

        +1 site. The mvn site goal succeeds with this patch.

        -1 core tests. The patch failed these unit tests:
        org.apache.hadoop.hbase.client.TestReplicaWithCluster

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/10081//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10081//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10081//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10081//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10081//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10081//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10081//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10081//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10081//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10081//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10081//console

        This message is automatically generated.

        anoop.hbase Anoop Sam John added a comment -

        +1. Looks great!

        The L1 LruBlockCache will be whatever hfile.block.cache.size is set to and the L2 BucketCache will be whatever hbase.bucketcache.size is set to.

        Great! I was also thinking of changing this area and making things simple.

        +    float bucketCachePercentage = c.getFloat(BUCKET_CACHE_SIZE_KEY, 0F);
        +    long bucketCacheSize = (long) (bucketCachePercentage < 1? mu.getMax() * bucketCachePercentage:
        +      bucketCachePercentage * 1024 * 1024);
        

        When bucketCacheIOEngineName is "heap" it is correct to calculate the memory size by mu.getMax() * bucketCachePercentage
        But when it is offheap, size calculation based on max heap memory looks strange no?

        Also some more work I will do later to make CombinedBlockCache as resizable. On resize the L1 can be resized. After this goes in will do.

        stack stack added a comment -

        Retry.

        hadoopqa Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12655938/11520v3.txt
        against trunk revision .
        ATTACHMENT ID: 12655938

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 3 new or modified tests.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 lineLengths. The patch does not introduce lines longer than 100

        +1 site. The mvn site goal succeeds with this patch.

        -1 core tests. The patch failed these unit tests:
        org.apache.hadoop.hbase.regionserver.TestHRegionBusyWait
        org.apache.hadoop.hbase.regionserver.TestHRegion
        org.apache.hadoop.hbase.client.TestReplicaWithCluster
        org.apache.hadoop.hbase.replication.TestPerTableCFReplication

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/10076//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10076//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10076//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10076//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10076//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10076//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10076//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10076//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10076//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10076//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10076//console

        This message is automatically generated.

        stack stack added a comment -

        Patch that addresses Nick's feedback. Should be good because we had the unit tests in place to verify a couple of the deploys.

        stack stack added a comment -

        Nice review.

        Thank you for prompting the refactor. It was hard to grok as it was. It is a bit cleaner now (could be refactored more but will leave as is else will start to ripple afar).

        On the test, offheap plus BUCKET_CACHE_COMBINED_KEY == false is a legit set up. This is how you ask for an L1+L2 that is not 'combined'. The L2 gets registered as the victimHandler for L1. When eviction thread runs, blocks moved to L2. When we search for blocks, we'll look in both places (first in LruBC then in its victim if not present...). Given this latter, the package-info as it is in the patch is right by my reckoning. From LruBlockCache:

          public Cacheable getBlock(BlockCacheKey cacheKey, boolean caching, boolean repeat,
              boolean updateCacheMetrics) {
            LruCachedBlock cb = map.get(cacheKey);
            if (cb == null) {
              if (!repeat && updateCacheMetrics) stats.miss(caching);
              if (victimHandler != null) {
                return victimHandler.getBlock(cacheKey, caching, repeat, updateCacheMetrics);
              }
              return null;
            }
            if (updateCacheMetrics) stats.hit(caching);
            cb.access(count.incrementAndGet());
            return cb.getBuffer();
          }
        
        hadoopqa Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12655892/11520v2.txt
        against trunk revision .
        ATTACHMENT ID: 12655892

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 3 new or modified tests.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 lineLengths. The patch does not introduce lines longer than 100

        +1 site. The mvn site goal succeeds with this patch.

        -1 core tests. The patch failed these unit tests:
        org.apache.hadoop.hbase.master.TestMasterFailover
        org.apache.hadoop.hbase.regionserver.TestHRegion
        org.apache.hadoop.hbase.regionserver.TestHRegionBusyWait
        org.apache.hadoop.hbase.client.TestReplicaWithCluster

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/10072//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10072//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10072//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10072//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10072//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10072//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10072//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10072//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10072//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10072//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10072//console

        This message is automatically generated.

        ndimiduk Nick Dimiduk added a comment -

        Oh, and in package-info, you say "we look in both places", but that's not quite right, in the way that it was meant for DoubleBlockCache. CombinedBlockCache first looks in lru, then looks in BucketCache.

        ndimiduk Nick Dimiduk added a comment -

        v2 looks good to me. I'm still confused by CacheConfig though. There's two places where GLOBAL_BLOCK_CACHE_INSTANCE = new CombinedBlockCache(lruCache, bucketCache); This should not be the case. Can we rework the logic such that if conf.get(BUCKET_CACHE_IOENGINE_KEY, null) == null we just create the LruBlockCache instance and return?

        Additional confusion comes from the updated test where you have conf.set(CacheConfig.BUCKET_CACHE_IOENGINE_KEY, "offheap") and conf.setBoolean(CacheConfig.BUCKET_CACHE_COMBINED_KEY, false). That should be an invalid configuration, right? "I want two caches, but I want them to be independent of each other"? That doesn't make sense. From my read, this should result in just an Lru instance, right? Isn't BUCKET_CACHE_COMBINED_KEY=true implied by specifying BUCKET_CACHE_IOENGINE_KEY?

        Maybe this is all for yet another follow-on patch? Anyway, +1 for v2.
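The restructuring asked for above could look roughly like the sketch below: bail out with the plain LruBlockCache when no IO engine is configured, so CombinedBlockCache is constructed in exactly one place. The types here are throwaway stubs standing in for the real CacheConfig/BlockCache classes, and the config handling is simplified — an assumption-laden sketch, not the actual HBase code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the suggested control flow. All classes below are minimal stubs,
// not the real org.apache.hadoop.hbase.io.hfile types.
public final class CacheConfigSketch {
  interface BlockCache {}
  static final class LruBlockCache implements BlockCache {
    BlockCache victim;
    void setVictimCache(BlockCache v) { victim = v; } // L2 as L1's victim handler
  }
  static final class BucketCache implements BlockCache {}
  static final class CombinedBlockCache implements BlockCache {
    CombinedBlockCache(LruBlockCache l1, BucketCache l2) {}
  }

  static BlockCache instantiateBlockCache(Map<String, String> conf) {
    LruBlockCache lru = new LruBlockCache();
    String engine = conf.get("hbase.bucketcache.ioengine");
    if (engine == null) {
      return lru; // L1 only: no bucket-cache code is reached at all
    }
    BucketCache bucket = new BucketCache();
    boolean combined =
        Boolean.parseBoolean(conf.getOrDefault("hbase.bucketcache.combinedcache.enabled", "true"));
    if (combined) {
      return new CombinedBlockCache(lru, bucket); // the one place this is built
    }
    lru.setVictimCache(bucket); // strict L1+L2 deploy
    return lru;
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    System.out.println(instantiateBlockCache(conf).getClass().getSimpleName());
    conf.put("hbase.bucketcache.ioengine", "offheap");
    System.out.println(instantiateBlockCache(conf).getClass().getSimpleName());
  }
}
```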

        hadoopqa Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12655861/11520.txt
        against trunk revision .
        ATTACHMENT ID: 12655861

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 3 new or modified tests.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 lineLengths. The patch does not introduce lines longer than 100

        +1 site. The mvn site goal succeeds with this patch.

        -1 core tests. The patch failed these unit tests:
        org.apache.hadoop.hbase.client.TestReplicaWithCluster

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/10070//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10070//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10070//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10070//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10070//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10070//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10070//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10070//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10070//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10070//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10070//console

        This message is automatically generated.

        stack stack added a comment -

        Thanks for taking a look Nick Dimiduk. v2 has tests for 'heap' and 'file' added. Or were you thinking of something else? If 'heap', yeah, could OOME if we assign too much to onheap bucketcache but that should happen on startup and be pretty plain as to what is going on.

        Chunhui Shen uses BucketCache onheap. It's way slower but heap won't fragment because of BC operations.

        enis Enis Soztutar added a comment -

        Skimmed the patch. Looks good for branch-1.

        apurtell Andrew Purtell added a comment -

        +1

        ndimiduk Nick Dimiduk added a comment -

        Sorry for the lag boss. This sounds like the right approach, with one caveat that configs just got harder for anyone using onheap BucketCache. I would have thought this a minority, but I think you were saying this is more common than I thought. Will give the patch a proper look this afternoon.

        stack stack added a comment -

        This patch relates to the parent in that it is a less ambitious simplification, something we could get into 1.0 (IMO). See release note for repercussions.

        stack stack added a comment -

        Patch is big mostly because of doc changes and test code.


          People

          • Assignee:
            stack stack
            Reporter:
            stack stack
          • Votes:
            0
            Watchers:
            11
