Details

    • Type: Improvement
    • Status: Patch Available
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.0.0
    • Fix Version/s: None
    • Component/s: BlockCache
    • Labels:
      None

      Description

      LruBlockCache uses the Segmented LRU (SLRU) policy to capture the frequency and recency of the working set. It achieves concurrency by using an O(n) background thread to prioritize the entries and evict. Accessing an entry is O(1): a hash table lookup, recording its logical access time, and setting a frequency flag. A write is performed in O(1) time by updating the hash table and triggering an async eviction thread. This provides ideal concurrency and minimizes latencies by penalizing the background thread instead of the caller. However, the policy does not age the frequencies and may not be resilient to various workload patterns.
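The SLRU bookkeeping above can be sketched as follows. This is an illustrative, single-threaded model with hypothetical names, not HBase's LruBlockCache: reads stamp a logical access time and a frequency flag in O(1), and a separate O(n log n) pass evicts single-access entries before multi-access ones, oldest first.

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Toy single-threaded SLRU model (illustrative only).
class SlruSketch<K, V> {
  private static final class Entry<V> {
    V value;
    long accessTime;      // logical clock tick, not wall time
    boolean multiAccess;  // set once the entry is read after insertion
  }

  private final Map<K, Entry<V>> map = new HashMap<>();
  private final int capacity;
  private long clock;

  SlruSketch(int capacity) { this.capacity = capacity; }

  V get(K key) {
    Entry<V> e = map.get(key);
    if (e == null) return null;
    e.accessTime = ++clock;
    e.multiAccess = true;  // a second touch promotes the entry
    return e.value;
  }

  void put(K key, V value) {
    Entry<V> e = new Entry<>();
    e.value = value;
    e.accessTime = ++clock;
    map.put(key, e);
    if (map.size() > capacity) {
      evict();  // LruBlockCache instead wakes an async eviction thread
    }
  }

  // O(n) pass: single-access entries are evicted before multi-access
  // ones; within a segment, the oldest logical access time goes first.
  private void evict() {
    List<K> victims = map.entrySet().stream()
        .sorted(Comparator
            .comparing((Map.Entry<K, Entry<V>> x) -> x.getValue().multiAccess)
            .thenComparingLong(x -> x.getValue().accessTime))
        .limit(map.size() - capacity)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
    victims.forEach(map::remove);
  }

  boolean contains(K key) { return map.containsKey(key); }
}
```

An entry that has been read at least once survives eviction ahead of entries that were only written, which is the segmentation the policy relies on.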

      W-TinyLFU (research paper) records the frequency in a counting sketch, ages it periodically by halving the counters, and orders entries by SLRU. An entry is discarded by comparing the frequency of the new arrival (the candidate) to the SLRU's victim, keeping the one with the higher frequency. This allows the operations to be performed in O(1) time and, through the use of a compact sketch, a much larger history is retained beyond the current working set. In a variety of real-world traces the policy had near-optimal hit rates.
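The sketch-with-aging and the admission decision can be illustrated with a toy count-min style sketch. All names and sizes here are hypothetical; Caffeine's real FrequencySketch is more compact and differs in detail:

```java
// Toy TinyLFU frequency sketch: 4 hash rows of counters, periodic
// halving to age the history, and an admit() comparing candidate
// frequency against the eviction victim's.
class FrequencySketch {
  private final int[][] counters = new int[4][1024];
  private final int[] seeds = {0x9E3779B9, 0x85EBCA6B, 0xC2B2AE35, 0x27D4EB2F};
  private final int sampleSize = 10 * 1024;  // age after this many increments
  private int additions;

  private int index(int row, Object key) {
    int h = key.hashCode() * seeds[row];
    h ^= h >>> 16;
    return h & 1023;
  }

  void increment(Object key) {
    for (int row = 0; row < 4; row++) {
      counters[row][index(row, key)]++;
    }
    if (++additions == sampleSize) reset();
  }

  // The estimate is the minimum over the rows, as in a count-min sketch.
  int frequency(Object key) {
    int min = Integer.MAX_VALUE;
    for (int row = 0; row < 4; row++) {
      min = Math.min(min, counters[row][index(row, key)]);
    }
    return min;
  }

  // Aging: halve every counter so stale history fades away.
  private void reset() {
    for (int[] row : counters) {
      for (int i = 0; i < row.length; i++) row[i] >>>= 1;
    }
    additions /= 2;
  }

  // TinyLFU admission: keep whichever of candidate/victim is hotter.
  boolean admit(Object candidate, Object victim) {
    return frequency(candidate) > frequency(victim);
  }
}
```

Because the sketch retains frequencies for keys no longer resident, a one-hit-wonder scan cannot displace entries that have proven their worth, which is where the hit-rate gains over plain SLRU come from.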

      Concurrency is achieved by buffering and replaying the operations, similar to a write-ahead log. A read is recorded into a striped ring buffer and a write into a queue. The operations are applied in batches under a try-lock by an asynchronous thread, thereby tracking the usage pattern without incurring high latencies (benchmarks).
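The buffer-and-replay idea might be sketched like this (assumed names and sizes; the real design stripes several lossy per-thread ring buffers rather than using one shared queue): readers append without blocking, and whoever observes a full buffer attempts a try-lock drain so no caller ever waits on the eviction policy.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.locks.ReentrantLock;

// Toy read buffer with try-lock batch replay (illustrative only).
class ReadBuffer {
  private final ArrayBlockingQueue<Object> buffer = new ArrayBlockingQueue<>(128);
  private final ReentrantLock drainLock = new ReentrantLock();
  private long drainedCount;

  void recordRead(Object key) {
    // offer() never blocks; a lost read under contention is acceptable
    // because the policy only needs a sample of the access pattern.
    boolean added = buffer.offer(key);
    if (!added || buffer.remainingCapacity() == 0) {
      tryDrain();
    }
  }

  private void tryDrain() {
    // try-lock: if another thread is already replaying the buffer,
    // skip the maintenance work instead of waiting for it.
    if (drainLock.tryLock()) {
      try {
        Object key;
        while ((key = buffer.poll()) != null) {
          drainedCount++;  // a real policy would reorder the entry here
        }
      } finally {
        drainLock.unlock();
      }
    }
  }

  long drained() { return drainedCount; }
}
```

The lossy buffer is the key trade-off: sampling the reads is enough to track recency and frequency, so the hot path stays wait-free.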

      In YCSB benchmarks the results were inconclusive. For a large cache (99% hit rate) the two caches have near-identical throughput and latencies, with LruBlockCache narrowly winning. At medium and small cache sizes, TinyLFU had a 1-4% hit rate improvement and therefore lower latencies. The lackluster result is because a synthetic Zipfian distribution is used, on which SLRU performs optimally. In a more varied, real-world workload we'd expect to see improvements from being able to make smarter predictions.

      The provided patch implements BlockCache using the Caffeine caching library (see HighScalability article).

      Edward Bortnikov and Eshcar Hillel have graciously provided guidance for evaluating this patch (github branch).

      1. bc.hit.count
        21 kB
        stack
      2. bc.miss.count
        25 kB
        stack
      3. branch-1.tinylfu.txt
        100 kB
        stack
      4. gets
        19 kB
        stack
      5. HBASE-15560.patch
        57 kB
        Ben Manes
      6. HBASE-15560.patch
        66 kB
        Ben Manes
      7. HBASE-15560.patch
        66 kB
        Ben Manes
      8. HBASE-15560.patch
        66 kB
        Ben Manes
      9. HBASE-15560.patch
        65 kB
        Ben Manes
      10. HBASE-15560.patch
        64 kB
        Ben Manes
      11. HBASE-15560.patch
        33 kB
        Ben Manes
      12. run_ycsb_c.sh
        6 kB
        stack
      13. run_ycsb_loading.sh
        3 kB
        stack
      14. tinylfu.patch
        33 kB
        Ben Manes

        Issue Links

          Activity

          stack stack added a comment -

          Patch looks good.

          Not too interested in giving folks a choice for L1. Would rather just switch over if can demonstrate it generally better running Caffeine; theory-wise it would make sense. Why you think LRUBlockCache did better in 100% case Ben Manes? Caffeine is doing more work? If the two implementations are close when all comes from cache and Caffeine does better when there are misses, I'd say it no brainer. Let me do my own compare...

          Anyone else want to chime in here? Mikhail Antonov?

          ben.manes Ben Manes added a comment -

          Hi Michael,

          I agree that offering a choice is unnecessary confusion. It made it easier to test via a flag and, if retained, could be used only transitionally to give users a release cycle to adjust.

          For the 100% case, LruBlockCache's work is a hash table read and an increment of a global counter (a fetch-and-add instruction). Caffeine's is the hash table read, selecting a ring buffer, a CAS append into it, and scheduling an async drain on ForkJoinPool#commonPool() if full (which might cause a few more context switches). The differences were within a margin of error in LruBlockCache's favor, but a tiny penalty seems understandable.

          busbey Sean Busbey added a comment -

          As noted in ACCUMULO-4177, this will require Java 8. Should we start the dev@ discussion now about if HBase 2.0 will be JDK8+ only?

          stack stack added a comment -

          Yes (didn't realize it was jdk8 only)... I still owe basic perf numbers here.

          apurtell Andrew Purtell added a comment -

          I don't see a problem making this pronouncement, if and when we take on something 8-only. We're past the end of public updates of Java 7.

          ben.manes Ben Manes added a comment -

          Please let me know if there is anything I can do on my end to help.

          stack stack added a comment -

          Ball is in our court, Ben Manes. I just need to do some basic tests. We got clearance already for master being jdk8, so that removed the only obstacle other than a bit of basic testing.

          ben.manes Ben Manes added a comment -

          Druid was recently struck by JDK-8078490 - Missed submissions in ForkJoinPool. This caused the cache to stop evicting because the asynchronous task was never run due to a race in the executor. The result was either a memory leak (2.2.6) or halting due to the back pressure (2.3.0). The solution, if you are running on an older JDK8 release, is to use a different executor (e.g. same-thread). This critical bug affected 8u40 - 8u60 (current is 8u92) and broke any FJP usage, such as CompletableFuture. I confirmed the fix with Doug Lea when investigating.

          In other news, Cassandra recently adopted Caffeine for its page cache (https://issues.apache.org/jira/browse/CASSANDRA-5863). Their analysis of the performance, hit rate, and scan tolerance was positive. I'm hoping to integrate the eviction policy into their off-heap cache (OHC), which uses LRU. That cache is extracted into a library, so contributing there might make it easy for you to benefit as well.

          ben.manes Ben Manes added a comment -

          Can we merge this in?

          busbey Sean Busbey added a comment -

          I believe we've got consensus on the master / HBase 2.0.z branch being jdk8-only, but we're blocked on implementing it over on HBASE-15624.

          ben.manes Ben Manes added a comment -

          Thanks! I'll track that one then.

          Apache9 Duo Zhang added a comment -

          Now master is moved to jdk8 only. Let's resume the progress here? Seems a big performance improvement.

          ben.manes Ben Manes added a comment -

          Rebased and upgraded to Caffeine 2.3.3

          ben.manes Ben Manes added a comment -

          Thanks Duo Zhang. The rebase had no conflicts and I didn't notice any major changes in the caching area since the previous patch.

          ben.manes Ben Manes added a comment -

          Patch submitted to review board.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 21s Docker mode activated.
          -1 @author 0m 0s The patch appears to contain 1 @author tags which the community has agreed to not allow in code contributions.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 8s Maven dependency ordering for branch
          +1 mvninstall 2m 41s master passed
          +1 compile 3m 6s master passed
          +1 checkstyle 0m 54s master passed
          +1 mvneclipse 1m 25s master passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: .
          -1 findbugs 0m 31s hbase-common in master has 1 extant Findbugs warnings.
          +1 javadoc 2m 48s master passed
          0 mvndep 0m 10s Maven dependency ordering for patch
          -1 mvninstall 0m 26s hbase-server in the patch failed.
          -1 mvninstall 1m 25s root in the patch failed.
          -1 compile 0m 29s hbase-server in the patch failed.
          -1 compile 0m 55s root in the patch failed.
          -1 javac 0m 29s hbase-server in the patch failed.
          -1 javac 0m 55s root in the patch failed.
          +1 checkstyle 0m 59s the patch passed
          -1 mvneclipse 1m 2s root in the patch failed.
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 3s The patch has no ill-formed XML file.
          -1 hadoopcheck 1m 6s The patch causes 22 errors with Hadoop v2.4.0.
          -1 hadoopcheck 2m 8s The patch causes 22 errors with Hadoop v2.4.1.
          -1 hadoopcheck 3m 10s The patch causes 22 errors with Hadoop v2.5.0.
          -1 hadoopcheck 4m 12s The patch causes 22 errors with Hadoop v2.5.1.
          -1 hadoopcheck 5m 16s The patch causes 22 errors with Hadoop v2.5.2.
          -1 hadoopcheck 6m 16s The patch causes 22 errors with Hadoop v2.6.1.
          -1 hadoopcheck 7m 16s The patch causes 22 errors with Hadoop v2.6.2.
          -1 hadoopcheck 8m 12s The patch causes 22 errors with Hadoop v2.6.3.
          -1 hadoopcheck 9m 13s The patch causes 22 errors with Hadoop v2.7.1.
          -1 hbaseprotoc 0m 22s hbase-server in the patch failed.
          -1 hbaseprotoc 0m 42s root in the patch failed.
          0 findbugs 0m 0s Skipped patched modules with no Java source: .
          -1 findbugs 0m 18s hbase-server in the patch failed.
          +1 javadoc 2m 47s the patch passed
          +1 unit 1m 46s hbase-common in the patch passed.
          -1 unit 0m 26s hbase-server in the patch failed.
          -1 unit 5m 40s root in the patch failed.
          +1 asflicense 0m 24s The patch does not generate ASF License warnings.
          43m 8s



          Subsystem Report/Notes
          Docker Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12828069/HBASE-15560.patch
          JIRA Issue HBASE-15560
          Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile xml
          uname Linux 151fd497472e 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / 8c4b09d
          Default Java 1.8.0_101
          @author https://builds.apache.org/job/PreCommit-HBASE-Build/3565/artifact/patchprocess/author-tags.txt
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/3565/artifact/patchprocess/branch-findbugs-hbase-common-warnings.html
          mvninstall https://builds.apache.org/job/PreCommit-HBASE-Build/3565/artifact/patchprocess/patch-mvninstall-hbase-server.txt
          mvninstall https://builds.apache.org/job/PreCommit-HBASE-Build/3565/artifact/patchprocess/patch-mvninstall-root.txt
          compile https://builds.apache.org/job/PreCommit-HBASE-Build/3565/artifact/patchprocess/patch-compile-hbase-server.txt
          compile https://builds.apache.org/job/PreCommit-HBASE-Build/3565/artifact/patchprocess/patch-compile-root.txt
          javac https://builds.apache.org/job/PreCommit-HBASE-Build/3565/artifact/patchprocess/patch-compile-hbase-server.txt
          javac https://builds.apache.org/job/PreCommit-HBASE-Build/3565/artifact/patchprocess/patch-compile-root.txt
          mvneclipse https://builds.apache.org/job/PreCommit-HBASE-Build/3565/artifact/patchprocess/patch-mvneclipse-root.txt
          hbaseprotoc https://builds.apache.org/job/PreCommit-HBASE-Build/3565/artifact/patchprocess/patch-hbaseprotoc-hbase-server.txt
          hbaseprotoc https://builds.apache.org/job/PreCommit-HBASE-Build/3565/artifact/patchprocess/patch-hbaseprotoc-root.txt
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/3565/artifact/patchprocess/patch-findbugs-hbase-server.txt
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/3565/artifact/patchprocess/patch-unit-hbase-server.txt
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/3565/artifact/patchprocess/patch-unit-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/3565/testReport/
          modules C: hbase-common hbase-server . U: .
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/3565/console
          Powered by Apache Yetus 0.3.0 http://yetus.apache.org

          This message was automatically generated.

          eshcar Eshcar Hillel added a comment -

          Hi Ben Manes,
          Left some comments in RB.
          It seems TinyLFU will ignore the inMemory flag. Is this correct?
          It might be ok given that the caching policy is more sophisticated than LRU, but can you be explicit about this issue?

          Also, given that LRU remains the default policy in the patch, it seems no new tests were added to test TinyLFU.

          ben.manes Ben Manes added a comment -

          Thanks for the reviews so far. When I originally wrote the patch I did enough for evaluation and tests seemed premature. I'll see what can be ported over from TestLruBlockCache, though I'd expect it to be minimal. I don't think it would be appropriate to write tests in HBase that make too many assumptions about the policy's behavior as that leads to relying on implementation details of an external library.

          BlockPriority is coupled to LruBlockCache and arguably it is a leaky abstraction by exposing it. I'd argue that the `BlockCache` should be redesigned to avoid dogpiling by computing through the cache and minimize implementation assumptions. `MemcachedBlockCache` silently ignores the flag, throws exceptions, and returns dummy values for methods that make no sense for it.

          If this is merged in, I'd like to see a similar ticket like ACCUMULO-4466 to evaluate whether to make TinyLFU the default. The Lru implementation could then be removed.

          ben.manes Ben Manes added a comment -

          I think that I addressed all of the comments, except where noted as unclear. Please take another look when you have a chance.

          eshcar Eshcar Hillel added a comment -

          Ben Manes seems you addressed all comments in RB.
          Can you upload a new patch which passes QA? Currently you have some compilation errors.
          You can take a look at the Report table under the unit lines.

          ben.manes Ben Manes added a comment -

          The error appears to be a pre-commit check that fails for CaffeinatedBlockCache formatting. However, I renamed that to TinyLfuBlockCache, so either the report is old or the build is retaining stale files. Locally a compile is successful.

          ben.manes Ben Manes added a comment -

          Attached patch from review board

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 53m 28s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 5 new or modified test files.
          0 mvndep 1m 54s Maven dependency ordering for branch
          +1 mvninstall 8m 49s master passed
          +1 compile 5m 0s master passed
          +1 checkstyle 1m 18s master passed
          +1 mvneclipse 2m 0s master passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: .
          -1 findbugs 0m 42s hbase-common in master has 1 extant Findbugs warnings.
          +1 javadoc 4m 13s master passed
          0 mvndep 0m 12s Maven dependency ordering for patch
          -1 mvninstall 4m 14s root in the patch failed.
          -1 compile 3m 4s root in the patch failed.
          -1 javac 3m 4s root in the patch failed.
          +1 checkstyle 1m 7s the patch passed
          -1 mvneclipse 1m 4s root in the patch failed.
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 4s The patch has no ill-formed XML file.
          -1 hadoopcheck 5m 12s The patch causes 11 errors with Hadoop v2.4.0.
          -1 hadoopcheck 9m 7s The patch causes 11 errors with Hadoop v2.4.1.
          -1 hadoopcheck 13m 7s The patch causes 11 errors with Hadoop v2.5.0.
          -1 hadoopcheck 17m 7s The patch causes 11 errors with Hadoop v2.5.1.
          -1 hadoopcheck 21m 7s The patch causes 11 errors with Hadoop v2.5.2.
          -1 hadoopcheck 25m 17s The patch causes 11 errors with Hadoop v2.6.1.
          -1 hadoopcheck 29m 25s The patch causes 11 errors with Hadoop v2.6.2.
          -1 hadoopcheck 33m 29s The patch causes 11 errors with Hadoop v2.6.3.
          -1 hadoopcheck 37m 38s The patch causes 11 errors with Hadoop v2.7.1.
          -1 hbaseprotoc 2m 10s root in the patch failed.
          0 findbugs 0m 0s Skipped patched modules with no Java source: .
          +1 findbugs 2m 49s the patch passed
          -1 javadoc 0m 47s hbase-server generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1)
          -1 javadoc 2m 40s root generated 1 new + 20 unchanged - 0 fixed = 21 total (was 20)
          +1 unit 2m 10s hbase-common in the patch passed.
          -1 unit 23m 25s hbase-server in the patch failed.
          -1 unit 30m 54s root in the patch failed.
          +1 asflicense 0m 58s The patch does not generate ASF License warnings.
          198m 24s



          Reason Tests
          Failed junit tests hadoop.hbase.filter.TestFilter
            hadoop.hbase.filter.TestFilter



          Subsystem Report/Notes
          Docker Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12830417/HBASE-15560.patch
          JIRA Issue HBASE-15560
          Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile xml
          uname Linux dcdd7cd61591 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / b9ec59e
          Default Java 1.8.0_101
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/3719/artifact/patchprocess/branch-findbugs-hbase-common-warnings.html
          mvninstall https://builds.apache.org/job/PreCommit-HBASE-Build/3719/artifact/patchprocess/patch-mvninstall-root.txt
          compile https://builds.apache.org/job/PreCommit-HBASE-Build/3719/artifact/patchprocess/patch-compile-root.txt
          javac https://builds.apache.org/job/PreCommit-HBASE-Build/3719/artifact/patchprocess/patch-compile-root.txt
          mvneclipse https://builds.apache.org/job/PreCommit-HBASE-Build/3719/artifact/patchprocess/patch-mvneclipse-root.txt
          hbaseprotoc https://builds.apache.org/job/PreCommit-HBASE-Build/3719/artifact/patchprocess/patch-hbaseprotoc-root.txt
          javadoc https://builds.apache.org/job/PreCommit-HBASE-Build/3719/artifact/patchprocess/diff-javadoc-javadoc-hbase-server.txt
          javadoc https://builds.apache.org/job/PreCommit-HBASE-Build/3719/artifact/patchprocess/diff-javadoc-javadoc-root.txt
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/3719/artifact/patchprocess/patch-unit-hbase-server.txt
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/3719/artifact/patchprocess/patch-unit-root.txt
          unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/3719/artifact/patchprocess/patch-unit-hbase-server.txt https://builds.apache.org/job/PreCommit-HBASE-Build/3719/artifact/patchprocess/patch-unit-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/3719/testReport/
          modules C: hbase-common hbase-server . U: .
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/3719/console
          Powered by Apache Yetus 0.3.0 http://yetus.apache.org

          This message was automatically generated.

          ben.manes Ben Manes added a comment -

Eshcar Hillel, none of these issues appear to be related to my changes. Do you know if there is anything I can do about it?

          busbey Sean Busbey added a comment -

The patch failures are because the Caffeine library's POM improperly names the ALv2:

          <licenses>
              <license>
                <name>The Apache Software License, Version 2.0</name>
                <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
                <distribution>repo</distribution>
              </license>
          </licenses>
          

The correct name is "Apache License, Version 2.0". You should update the supplemental license information; there are a bunch of examples in there from ASF projects that used the wrong name for years. The file is hbase-resource-bundle/src/main/resources/supplemental-models.xml.
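A corrected entry in supplemental-models.xml might look like the following sketch. The groupId/artifactId coordinates shown are an assumption about the Caffeine artifact, not copied from the actual file:

```xml
<supplement>
  <project>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
    <licenses>
      <license>
        <!-- The exact name string matters for the license audit check -->
        <name>Apache License, Version 2.0</name>
        <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
        <distribution>repo</distribution>
      </license>
    </licenses>
  </project>
</supplement>
```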

          ben.manes Ben Manes added a comment -

          Thanks Sean Busbey! I made the update and will fix the definition in my build.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 16s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 5 new or modified test files.
          0 mvndep 0m 26s Maven dependency ordering for branch
          +1 mvninstall 3m 36s master passed
          +1 compile 3m 46s master passed
          +1 checkstyle 0m 32s master passed
          +1 mvneclipse 1m 43s master passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hbase-resource-bundle .
          -1 findbugs 0m 39s hbase-common in master has 1 extant Findbugs warnings.
          +1 javadoc 2m 47s master passed
          0 mvndep 0m 9s Maven dependency ordering for patch
          +1 mvninstall 4m 19s the patch passed
          +1 compile 3m 44s the patch passed
          +1 javac 3m 44s the patch passed
          +1 checkstyle 0m 28s the patch passed
          +1 mvneclipse 1m 39s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 5s The patch has no ill-formed XML file.
          +1 hadoopcheck 28m 21s Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1.
          +1 hbaseprotoc 1m 39s the patch passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hbase-resource-bundle .
          +1 findbugs 2m 36s the patch passed
          -1 javadoc 0m 27s hbase-server generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1)
          -1 javadoc 1m 42s root generated 1 new + 20 unchanged - 0 fixed = 21 total (was 20)
          +1 unit 0m 6s hbase-resource-bundle in the patch passed.
          +1 unit 1m 44s hbase-common in the patch passed.
          -1 unit 14m 26s hbase-server in the patch failed.
          -1 unit 19m 38s root in the patch failed.
          +1 asflicense 0m 37s The patch does not generate ASF License warnings.
          98m 22s



          Reason Tests
          Failed junit tests hadoop.hbase.filter.TestFilter
            hadoop.hbase.filter.TestFilter



          Subsystem Report/Notes
          Docker Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12830446/HBASE-15560.patch
          JIRA Issue HBASE-15560
          Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile xml
          uname Linux 53b0e71253c7 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / db394f5
          Default Java 1.8.0_101
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/3726/artifact/patchprocess/branch-findbugs-hbase-common-warnings.html
          javadoc https://builds.apache.org/job/PreCommit-HBASE-Build/3726/artifact/patchprocess/diff-javadoc-javadoc-hbase-server.txt
          javadoc https://builds.apache.org/job/PreCommit-HBASE-Build/3726/artifact/patchprocess/diff-javadoc-javadoc-root.txt
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/3726/artifact/patchprocess/patch-unit-hbase-server.txt
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/3726/artifact/patchprocess/patch-unit-root.txt
          unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/3726/artifact/patchprocess/patch-unit-hbase-server.txt https://builds.apache.org/job/PreCommit-HBASE-Build/3726/artifact/patchprocess/patch-unit-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/3726/testReport/
          modules C: hbase-resource-bundle hbase-common hbase-server . U: .
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/3726/console
          Powered by Apache Yetus 0.3.0 http://yetus.apache.org

          This message was automatically generated.

          busbey Sean Busbey added a comment -

New javadoc warning looks legit. The unit test failures are the same in the last two runs.

Could you take a look, Ben Manes?

          ben.manes Ben Manes added a comment -

          Fixed JavaDoc.

I am unable to reproduce the test failure locally when I run:
$ mvn test -Dtest=org.apache.hadoop.hbase.filter.TestFilter

          ben.manes Ben Manes added a comment -

          Cancelling and re-submitting patch, hoping that Hadoop QA will pick it up this time.

          ben.manes Ben Manes added a comment -

          Sean Busbey, I can't seem to trigger a new build. Can you please take a look?

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 5 new or modified test files.
          0 mvndep 0m 17s Maven dependency ordering for branch
          +1 mvninstall 2m 54s master passed
          +1 compile 3m 22s master passed
          +1 checkstyle 0m 28s master passed
          +1 mvneclipse 1m 35s master passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hbase-resource-bundle .
          -1 findbugs 0m 34s hbase-common in master has 1 extant Findbugs warnings.
          +1 javadoc 2m 29s master passed
          0 mvndep 0m 7s Maven dependency ordering for patch
          +1 mvninstall 3m 59s the patch passed
          +1 compile 3m 18s the patch passed
          +1 javac 3m 18s the patch passed
          +1 checkstyle 0m 28s the patch passed
          +1 mvneclipse 1m 35s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 5s The patch has no ill-formed XML file.
          +1 hadoopcheck 25m 43s Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1.
          +1 hbaseprotoc 1m 37s the patch passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hbase-resource-bundle .
          +1 findbugs 2m 39s the patch passed
          +1 javadoc 2m 39s the patch passed
          +1 unit 0m 6s hbase-resource-bundle in the patch passed.
          +1 unit 1m 43s hbase-common in the patch passed.
          -1 unit 81m 20s hbase-server in the patch failed.
          -1 unit 92m 57s root in the patch failed.
          +1 asflicense 0m 51s The patch does not generate ASF License warnings.
          233m 18s



          Reason Tests
          Timed out junit tests org.apache.hadoop.hbase.client.TestFromClientSide
            org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas
            org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient
            org.apache.hadoop.hbase.client.TestMobSnapshotCloneIndependence
            org.apache.hadoop.hbase.client.TestAdmin2
            org.apache.hadoop.hbase.client.TestHCM
            org.apache.hadoop.hbase.client.TestSnapshotFromClientWithRegionReplicas



          Subsystem Report/Notes
          Docker Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12830624/HBASE-15560.patch
          JIRA Issue HBASE-15560
          Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile xml
          uname Linux 4eb6ba4f664d 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / de7316b
          Default Java 1.8.0_101
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/3748/artifact/patchprocess/branch-findbugs-hbase-common-warnings.html
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/3748/artifact/patchprocess/patch-unit-hbase-server.txt
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/3748/artifact/patchprocess/patch-unit-root.txt
          unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/3748/artifact/patchprocess/patch-unit-hbase-server.txt https://builds.apache.org/job/PreCommit-HBASE-Build/3748/artifact/patchprocess/patch-unit-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/3748/testReport/
          modules C: hbase-resource-bundle hbase-common hbase-server . U: .
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/3748/console
          Powered by Apache Yetus 0.3.0 http://yetus.apache.org

          This message was automatically generated.

          ben.manes Ben Manes added a comment -

          Sean Busbey, Eshcar Hillel: The remaining issues appear unrelated to my changes.

          eshcar Eshcar Hillel added a comment -

Hi,
This has often happened to me: tests fail in QA but pass locally. Usually this is resolved after a rebase.
I am not a committer, and I'm not sure what the policy is for committing patches that didn't pass QA; however, I can give my
+1
Stack, Sean Busbey, any input here?

          yuzhihong@gmail.com Ted Yu added a comment -

          Looks good overall.

          +  public static final String HFILE_BLOCK_CACHE_POLICY_KEY =
          +      "hfile.block.cache.policy";
          +
          +  public static final String HFILE_BLOCK_CACHE_POLICY_DEFAULT = "LRU";
          

The above constants are used in CacheConfig only. Please move them there.

          +        // FIXME: Currently does not capture the insertion time
          +        stats.evicted(/* cachedTime */ 0L, key.isPrimary());
          

Would the above be done in a follow-up?
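Moving the constants as suggested would leave them alongside CacheConfig's other cache keys. A minimal sketch, with the constant names taken from the patch excerpt above; the surrounding class is a stand-in for the real org.apache.hadoop.hbase.io.hfile.CacheConfig, not its actual contents:

```java
// Sketch only: where the policy constants might live after the suggested move.
public class CacheConfigSketch {

  /** Configuration key selecting the block cache eviction policy. */
  public static final String HFILE_BLOCK_CACHE_POLICY_KEY =
      "hfile.block.cache.policy";

  /** Default policy, preserving the existing LruBlockCache behavior. */
  public static final String HFILE_BLOCK_CACHE_POLICY_DEFAULT = "LRU";

  public static void main(String[] args) {
    // A caller would read the policy from the site configuration,
    // falling back to the default when the key is unset.
    System.out.println(HFILE_BLOCK_CACHE_POLICY_KEY + " -> "
        + HFILE_BLOCK_CACHE_POLICY_DEFAULT);
  }
}
```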

          anoop.hbase Anoop Sam John added a comment -

          Doing another pass over the patch. Have some comments. Will complete today.

          ben.manes Ben Manes added a comment -

We can add it now if that's desired, or defer the decision on whether to add or remove it to the team's consensus. Neither the Lru nor TinyLfu cache uses the field in its policy, and I don't see how it provides meaningful information to users.

          yuzhihong@gmail.com Ted Yu added a comment -

          +1, pending QA results.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 11s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 5 new or modified test files.
          0 mvndep 0m 15s Maven dependency ordering for branch
          +1 mvninstall 2m 45s master passed
          +1 compile 3m 11s master passed
          +1 checkstyle 0m 26s master passed
          +1 mvneclipse 1m 32s master passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hbase-resource-bundle .
          -1 findbugs 0m 32s hbase-common in master has 1 extant Findbugs warnings.
          +1 javadoc 2m 19s master passed
          0 mvndep 0m 7s Maven dependency ordering for patch
          +1 mvninstall 3m 43s the patch passed
          +1 compile 3m 12s the patch passed
          +1 javac 3m 12s the patch passed
          +1 checkstyle 0m 26s the patch passed
          +1 mvneclipse 1m 32s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 5s The patch has no ill-formed XML file.
          +1 hadoopcheck 24m 21s Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1.
          +1 hbaseprotoc 1m 33s the patch passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hbase-resource-bundle .
          +1 findbugs 2m 24s the patch passed
          +1 javadoc 2m 20s the patch passed
          +1 unit 0m 6s hbase-resource-bundle in the patch passed.
          +1 unit 1m 40s hbase-common in the patch passed.
          -1 unit 89m 36s hbase-server in the patch failed.
          -1 unit 86m 13s root in the patch failed.
          +1 asflicense 0m 48s The patch does not generate ASF License warnings.
          231m 36s



          Reason Tests
          Failed junit tests hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush
            hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush
          Timed out junit tests org.apache.hadoop.hbase.constraint.TestConstraint
            org.apache.hadoop.hbase.TestNamespace
            org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDeletes
            org.apache.hadoop.hbase.snapshot.TestMobRestoreFlushSnapshotFromClient
            org.apache.hadoop.hbase.security.access.TestWithDisabledAuthorization
            org.apache.hadoop.hbase.security.access.TestAccessController



          Subsystem Report/Notes
          Docker Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12831260/HBASE-15560.patch
          JIRA Issue HBASE-15560
          Optional Tests asflicense javac javadoc unit xml compile findbugs hadoopcheck hbaseanti checkstyle
          uname Linux 76260aa87b56 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / b8ad9b1
          Default Java 1.8.0_101
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/3795/artifact/patchprocess/branch-findbugs-hbase-common-warnings.html
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/3795/artifact/patchprocess/patch-unit-hbase-server.txt
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/3795/artifact/patchprocess/patch-unit-root.txt
          unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/3795/artifact/patchprocess/patch-unit-hbase-server.txt https://builds.apache.org/job/PreCommit-HBASE-Build/3795/artifact/patchprocess/patch-unit-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/3795/testReport/
          modules C: hbase-resource-bundle hbase-common hbase-server . U: .
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/3795/console
          Powered by Apache Yetus 0.3.0 http://yetus.apache.org

          This message was automatically generated.

          yuzhihong@gmail.com Ted Yu added a comment -

          TestHRegionWithInMemoryFlush was not modified by the patch - LRU cache is used.

          HBASE-16701 tracks this flaky test.

          yuzhihong@gmail.com Ted Yu added a comment -

          TestHRegionWithInMemoryFlush passes with patch locally.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 16s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          0 mvndep 0m 25s Maven dependency ordering for branch
          +1 mvninstall 3m 13s master passed
          +1 compile 3m 24s master passed
          +1 checkstyle 0m 29s master passed
          +1 mvneclipse 1m 40s master passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hbase-resource-bundle .
          -1 findbugs 0m 36s hbase-common in master has 1 extant Findbugs warnings.
          +1 javadoc 2m 30s master passed
          0 mvndep 0m 8s Maven dependency ordering for patch
          +1 mvninstall 4m 0s the patch passed
          +1 compile 3m 14s the patch passed
          +1 javac 3m 14s the patch passed
          +1 checkstyle 0m 28s the patch passed
          +1 mvneclipse 1m 34s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 5s The patch has no ill-formed XML file.
          +1 hadoopcheck 26m 17s Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1.
          +1 hbaseprotoc 1m 41s the patch passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hbase-resource-bundle .
          +1 findbugs 2m 35s the patch passed
          +1 javadoc 2m 31s the patch passed
          +1 unit 0m 6s hbase-resource-bundle in the patch passed.
          +1 unit 1m 44s hbase-common in the patch passed.
          -1 unit 92m 25s hbase-server in the patch failed.
          -1 unit 91m 1s root in the patch failed.
          +1 asflicense 0m 50s The patch does not generate ASF License warnings.
          243m 30s



          Reason Tests
          Failed junit tests hadoop.hbase.master.procedure.TestDispatchMergingRegionsProcedure
            hadoop.hbase.master.procedure.TestDispatchMergingRegionsProcedure
          Timed out junit tests org.apache.hadoop.hbase.client.TestFromClientSide
            org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient
            org.apache.hadoop.hbase.client.TestMobSnapshotCloneIndependence
            org.apache.hadoop.hbase.client.TestAdmin2
            org.apache.hadoop.hbase.client.TestHCM
            org.apache.hadoop.hbase.client.TestSizeFailures



          Subsystem Report/Notes
          Docker Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12831446/HBASE-15560.patch
          JIRA Issue HBASE-15560
          Optional Tests asflicense javac javadoc unit xml compile findbugs hadoopcheck hbaseanti checkstyle
          uname Linux 575a2bd027e0 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / cc237c4
          Default Java 1.8.0_101
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/3803/artifact/patchprocess/branch-findbugs-hbase-common-warnings.html
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/3803/artifact/patchprocess/patch-unit-hbase-server.txt
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/3803/artifact/patchprocess/patch-unit-root.txt
          unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/3803/artifact/patchprocess/patch-unit-hbase-server.txt https://builds.apache.org/job/PreCommit-HBASE-Build/3803/artifact/patchprocess/patch-unit-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/3803/testReport/
          modules C: hbase-resource-bundle hbase-common hbase-server . U: .
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/3803/console
          Powered by Apache Yetus 0.3.0 http://yetus.apache.org

          This message was automatically generated.

          stack stack added a comment - - edited

I am -1 on this patch until there has been a perf compare of before vs. after (I note this as a prereq. on commit a few times in the comments above). I don't see a compare here. Please revert until that has happened. I just got my test cluster back and intend to try this over the next few days unless someone else beats me to it.

          stack stack added a comment -

          Ted Yu See above.

          hudson Hudson added a comment -

          FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #1725 (See https://builds.apache.org/job/HBase-Trunk_matrix/1725/)
          HBASE-15560 TinyLFU-based BlockCache (Ben Manes) (tedyu: rev 9e0c2562a95638600781cb894c0ae7bb404573ca)

          • (add) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/TinyLfuBlockCache.java
          • (edit) hbase-common/src/main/resources/hbase-default.xml
          • (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java
          • (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
          • (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/InclusiveCombinedBlockCache.java
          • (edit) hbase-resource-bundle/src/main/resources/supplemental-models.xml
          • (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheConfig.java
          • (edit) pom.xml
          • (add) hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestTinyLfuBlockCache.java
          • (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
          • (edit) hbase-server/pom.xml
          • (add) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/FirstLevelBlockCache.java
          • (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
          ben.manes Ben Manes added a comment -

As noted previously, please try with a real workload rather than a synthetic one. When Eshcar Hillel and I tried, we found that Lru was already optimized for YCSB, making the difference negligible. Given the paper's real-world traces and Druid's experiences (1, 2), TinyLFU appears promising.

          stack stack added a comment -

Thanks Ben Manes. Will try w/ some variance. I just want to confirm that there is no regression, or that if there is a tax when all is out of cache, it is small or at least quantifiable. As is, we've committed a change to a core piece of our serving w/o a clue as to what it does performance-wise.

          anoop.hbase Anoop Sam John added a comment -

The old LRU cache remains the only default.
BTW we will have to add some release notes to the issue on how to enable the new L1 cache.

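As a starting point for such a release note, a configuration snippet along these lines could be used. This is a hypothetical sketch: the property name `hfile.block.cache.policy` is taken from the benchmark settings quoted later in this thread, and the value `TinyLfu` is assumed from the patch rather than verified here.

```xml
<!-- Hypothetical hbase-site.xml fragment: select the TinyLFU-based
     L1 block cache instead of the default LRU policy. The exact
     accepted value should be confirmed against the committed patch. -->
<property>
  <name>hfile.block.cache.policy</name>
  <value>TinyLfu</value>
</property>
```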
          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #1726 (See https://builds.apache.org/job/HBase-Trunk_matrix/1726/)
          HBASE-15560 TinyLFU-based BlockCache - revert pending performance (tedyu: rev b952e64751d309e920bf6e44caa2b3d5801e3be8)

          • (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
          • (delete) hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestTinyLfuBlockCache.java
          • (edit) hbase-server/pom.xml
          • (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java
          • (delete) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/TinyLfuBlockCache.java
          • (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
          • (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/InclusiveCombinedBlockCache.java
          • (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheConfig.java
          • (edit) hbase-common/src/main/resources/hbase-default.xml
          • (delete) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/FirstLevelBlockCache.java
          • (edit) pom.xml
          • (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
          • (edit) hbase-resource-bundle/src/main/resources/supplemental-models.xml
          stack stack added a comment -

If you are saying that the old cache remains the default, which seems to be the case looking at the patch, another -1 on top of the above -1. Either TinyLFU is better and it should be enabled by default, or let us not bother our operators and users w/ exotic choices such as which LRU algo to use.

          ben.manes Ben Manes added a comment -

This flag was to simplify evaluation. It can either be retained to allow a gradual rollout (e.g. as a feature flag in case users discover concerns) or removed. For providing a patch it seemed most respectful, on my part, not to try to force a switch. I'm fine removing the configuration once the team is confident in adopting the new policy.

          stack stack added a comment -

          This flag was to simplify evaluation.

          Understood. Thanks for taking that tack.

Some background. In hbase and hadoop, the code base gets loaded up w/ options. The result is code that gets no exercise and operators who are confused by the plethora of possibilities. To me, putting a feature behind a flag indicates little to no testing (let the 'user' do the testing), and that the feature is destined to rot because it is not used.

This feature looks great. It should be on by default, but it is in a tender area, so we should be able to say when it shines and when it might cost the user a little perf. We owe that much to you the contributor and to our users.

          Thanks Ben.

          ben.manes Ben Manes added a comment -

I know the frustration and agree that feature flags should have a clear deprecation cycle. You might want to consider a special deprecation annotation indicating the release (or date) by which a flag should be removed. A custom checkstyle / pmd rule would be easy to write and would allow validating this in the build. If a flag rots due to lack of testing, that pushes for a decision to be made.

In general I would have performance tested this myself, but not being an HBase user, my results would be meaningless. It's been fun to provide the patch and work through the process, as requested by Edward Bortnikov, but I do need help on that last mile. So I am looking forward to digging into the results when we have some hard numbers.

          stack stack added a comment -

          Thanks. Yeah, we owe you the last mile. Let us do that.

          Ted Yu There is a -1 against this patch.

          yuzhihong@gmail.com Ted Yu added a comment -

This was reverted in the morning.

I have been in a support call for most of the day.

          eshcar Eshcar Hillel added a comment -

Hi, can I add my view of this issue?

          I think the gap between what is required by the community and what can be provided is not that big.

          1) Ben Manes you already have the results of the YCSB benchmark you ran with the initial patch.
Can you rerun these tests with the latest patch and publish the results in some form?
          I suggest you publish the exact settings you used plus raw results (rather than lift).
You can either present a comparison table of the mean latency + high (95th/99th) percentiles over different cache sizes, or depict the dynamics of the latency throughout the run in a graph (by using the '-s' flag – I can explain offline), or best do both.
If you dig into the region server log you can find records of the hit ratio, which you can also depict alongside the latency; that could be nice to see.
These results would show that when combining HBase and Caffeine there is no overhead and in some cases a measurable benefit, even in synthetic workloads.

          2) stack if the results of these experiments would satisfy the community then the default can be switched to TinyLFU, with LRU being optional and pushed to master. This would allow the community to further experiment with this feature more easily, and to modify it if necessary.

          3) Ben briefly described the results of the benchmarks when using a static distribution. Here is my explanation of the results (Ben feel free to correct me if I'm wrong):
The distribution of the items is skewed but static, with a small (high-frequency) head and a long (low-frequency) tail.
With a given cache size – after the cache is warm – the items at the head fill the second segment (which is 80% of the cache in TinyLFU) and the following items fill the first segment.
With LRU, items from the tail of the distribution occasionally cause evictions from the first segment, which later translate into cache misses and increased latency, while TinyLFU tends to keep items with higher frequency in the cache, which results in fewer misses. As the size of the cache grows, fewer and fewer items are evicted from the cache and the difference diminishes.
With a dynamic distribution, items are continuously evicted from the cache, and here the benefit of TinyLFU should be clearly pronounced.
We have traces of production workloads that would likely exhibit skewed, dynamic distributions.
However, we cannot share them, and we currently don't have the resources to turn them into a running benchmark.
We could try to make an effort in this direction if it becomes a make-it-or-break-it point.

Would this be acceptable: 1) Ben Manes runs the static YCSB benchmark; 2) stack commits TinyLFU as the default; 3) benchmarks with dynamic workloads are run, either by us or by others in the community.

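Eshcar's explanation above can be illustrated with a toy simulation (not HBase code; all names and parameters here are invented for illustration): a plain LRU cache versus an LRU cache guarded by a TinyLFU-style frequency admission filter, both run over a static Zipf-like trace. On such a trace the admission filter tends to keep the high-frequency head resident instead of letting one-hit tail items evict it. The real policy uses an aged CountMin sketch; a plain counter stands in for it here.

```python
# Toy comparison of LRU vs. LRU-with-frequency-admission on a static
# skewed trace. Illustrative only; not the HBase/Caffeine implementation.
import random
from collections import Counter, OrderedDict

def zipf_trace(n_items, n_ops, s=1.2, seed=42):
    # Skewed, static popularity: item i drawn with weight 1/(i+1)^s.
    rng = random.Random(seed)
    weights = [1.0 / (i + 1) ** s for i in range(n_items)]
    return rng.choices(range(n_items), weights=weights, k=n_ops)

def lru_hits(trace, capacity):
    cache, hits = OrderedDict(), 0
    for key in trace:
        if key in cache:
            hits += 1
            cache.move_to_end(key)          # refresh recency
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)   # evict LRU victim
            cache[key] = True               # always admit
    return hits / len(trace)

def tinylfu_hits(trace, capacity):
    cache, hits = OrderedDict(), 0
    freq = Counter()                        # stand-in for the CountMin sketch
    for key in trace:
        freq[key] += 1
        if key in cache:
            hits += 1
            cache.move_to_end(key)
        elif len(cache) < capacity:
            cache[key] = True
        else:
            victim = next(iter(cache))      # current LRU victim
            # Admission filter: keep whichever of candidate/victim
            # has the higher observed frequency.
            if freq[key] > freq[victim]:
                cache.popitem(last=False)
                cache[key] = True
    return hits / len(trace)

trace = zipf_trace(n_items=10_000, n_ops=200_000)
print("LRU hit ratio:    ", lru_hits(trace, 500))
print("TinyLFU hit ratio:", tinylfu_hits(trace, 500))
```

Under a static distribution the filtered cache converges toward holding the most frequent items, which matches the observation above that the gap appears mainly at small and medium cache sizes.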
          stack stack added a comment -

          Thanks Eshcar Hillel

I was going to do a basic #1 but if Ben did it, that'd be great too. Just looking to see that there is no radical regression and that there is some semblance of benefit to be had moving to the new algo in the general case (YCSB, in the absence of anything better, represents the 'general' case). If #1, let's do #2. It is good that fallback is easy if there's an issue, but let's go w/ TinyLFU if it's generally better rather than mess around.

          ben.manes Ben Manes added a comment -

          I can take another stab at (1) and work with Eshcar Hillel to ensure its validity. I should have time over this upcoming weekend.

Like the paper's simulations, I can also run an anonymized trace to calculate the hit/miss curves of the two policies. The trace file would be a sequence of cache key hashes recorded on each cache.get() call. While not taking entry sizes into account, it should tell us whether the policy improves the efficiency in a realistic workload. That lends itself to estimating the new response times, assuming all else is equal. Would an anonymized access trace be easy to acquire and share?

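Such a trace could be produced with a sketch like the following (hypothetical; not part of the patch): each key touched on a cache get is recorded as a salted hash, so the reuse sequence is preserved without revealing the original keys.

```python
# Hypothetical trace anonymizer. A salted hash maps each cache key to a
# stable opaque token: the same key always yields the same token within
# one trace, so reuse distances survive, but keys cannot be recovered
# once the per-trace salt is discarded.
import hashlib
import os

SALT = os.urandom(16)  # generate per trace, discard afterwards

def anonymize(key: bytes) -> str:
    # 64-bit token: collisions are negligible for trace-sized key sets.
    return hashlib.sha256(SALT + key).hexdigest()[:16]

def record(trace_file, key: bytes) -> None:
    # Call on every cache get; appends one token per access.
    trace_file.write(anonymize(key) + "\n")
```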
          eshcar Eshcar Hillel added a comment -

          No, unfortunately we currently don't have anonymized traces to share.
          But let's start with step (1) and continue from there. I think when the cache is small/medium sized we can get interesting results even with YCSB synthetic workloads.

          ben.manes Ben Manes added a comment -

          1. I performed "git revert b952e64"
          2. Configured YCSB workload B with the settings,

          recordcount=100000
          operationcount=1000000
          

          3. Started hbase server with the hbase-site.xml configuration,

          <property>
           <name>hfile.block.cache.size</name>
           <value>0.1f</value>
          </property>
          <property>
           <name>hbase.regionserver.global.memstore.size</name>
           <value>0.7f</value>
          </property>
          <property>
           <name>hfile.block.cache.policy</name>
           <value>Lru</value>
          </property>
          

          4. Loaded and ran ycsb with Lru and TinyLfu.

          LruBlockCache

          totalSize=96.67 MB, freeSize=2.32 MB, max=98.99 MB, blockCount=1793, 
          accesses=4766387, hits=4081322, hitRatio=85.63%, 
          cachingAccesses=4764133, cachingHits=4081322, cachingHitsRatio=85.67%, 
          evictions=10402, evicted=681017, evictedPerRun=65.46981349740435
          
          [OVERALL], RunTime(ms), 189753.0
          [OVERALL], Throughput(ops/sec), 5270.008906315052
          [TOTAL_GCS_PS_Scavenge], Count, 717.0
          [TOTAL_GC_TIME_PS_Scavenge], Time(ms), 730.0
          [TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.38471065016099876
          [TOTAL_GCS_PS_MarkSweep], Count, 0.0
          [TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0.0
          [TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
          [TOTAL_GCs], Count, 717.0
          [TOTAL_GC_TIME], Time(ms), 730.0
          [TOTAL_GC_TIME_%], Time(%), 0.38471065016099876
          [READ], Operations, 950125.0
          [READ], AverageLatency(us), 152.8599626364952
          [READ], MinLatency(us), 76.0
          [READ], MaxLatency(us), 60959.0
          [READ], 95thPercentileLatency(us), 215.0
          [READ], 99thPercentileLatency(us), 253.0
          [READ], Return=OK, 950125
          [CLEANUP], Operations, 2.0
          [CLEANUP], AverageLatency(us), 72164.0
          [CLEANUP], MinLatency(us), 8.0
          [CLEANUP], MaxLatency(us), 144383.0
          [CLEANUP], 95thPercentileLatency(us), 144383.0
          [CLEANUP], 99thPercentileLatency(us), 144383.0
          [UPDATE], Operations, 49875.0
          [UPDATE], AverageLatency(us), 215.8185664160401
          [UPDATE], MinLatency(us), 125.0
          [UPDATE], MaxLatency(us), 36159.0
          [UPDATE], 95thPercentileLatency(us), 294.0
          [UPDATE], 99thPercentileLatency(us), 484.0
          [UPDATE], Return=OK, 49875
          

          TinyLfuBlockCache

          totalSize=98.98 MB, freeSize=4.07 KB, max=98.99 MB, blockCount=2112,
          accesses=4170109, hits=3794003, hitRatio=90.98%, 
          cachingAccesses=4170112, cachingHits=3794005, cachingHitsRatio=90.98%, 
          evictions=373994, evicted=37399
          
          [OVERALL], RunTime(ms), 118390.0
          [OVERALL], Throughput(ops/sec), 8446.659346228567
          [TOTAL_GCS_PS_Scavenge], Count, 664.0
          [TOTAL_GC_TIME_PS_Scavenge], Time(ms), 714.0
          [TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.6030914773207197
          [TOTAL_GCS_PS_MarkSweep], Count, 0.0
          [TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0.0
          [TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
          [TOTAL_GCs], Count, 664.0
          [TOTAL_GC_TIME], Time(ms), 714.0
          [TOTAL_GC_TIME_%], Time(%), 0.6030914773207197
          [READ], Operations, 949956.0
          [READ], AverageLatency(us), 112.233432916893
          [READ], MinLatency(us), 75.0
          [READ], MaxLatency(us), 61151.0
          [READ], 95thPercentileLatency(us), 165.0
          [READ], 99thPercentileLatency(us), 204.0
          [READ], Return=OK, 949956
          [CLEANUP], Operations, 2.0
          [CLEANUP], AverageLatency(us), 59732.0
          [CLEANUP], MinLatency(us), 8.0
          [CLEANUP], MaxLatency(us), 119487.0
          [CLEANUP], 95thPercentileLatency(us), 119487.0
          [CLEANUP], 99thPercentileLatency(us), 119487.0
          [UPDATE], Operations, 50044.0
          [UPDATE], AverageLatency(us), 188.9981216529454
          [UPDATE], MinLatency(us), 122.0
          [UPDATE], MaxLatency(us), 36671.0
          [UPDATE], 95thPercentileLatency(us), 257.0
          [UPDATE], 99thPercentileLatency(us), 489.0
          [UPDATE], Return=OK, 50044
          
          stack stack added a comment -

          Did it all fit in memory when you ran this test, Ben Manes? Or were there cache misses? It looks like tinylfu did better in your test? Thanks.

          ben.manes Ben Manes added a comment -

          YCSB workload B states that each record is 1KB, so that is about 100MB (97MiB). That probably introduces some misses due to Java object overhead. Since LruBlockCache uses a high watermark and evicts down to a low watermark, it could be aggressively under-utilizing the capacity. So a higher hit rate might be understandable, in addition to the workload pattern's characteristics.
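
          The watermark point can be illustrated with simple arithmetic. This is a rough model only, and the 0.99/0.95 factors are assumed example values (the real LruBlockCache factors are configurable): if eviction triggers at a high watermark and drains down to a low watermark, utilization oscillates inside that band, so on average the cache holds less than its nominal capacity.

          ```java
          // Illustrative arithmetic only: a cache that evicts from a high
          // watermark down to a low watermark never sustains full capacity.
          public class WatermarkSketch {
            public static double averageUtilization(double highWatermark, double lowWatermark) {
              // Rough model: usage drifts between the two watermarks, so the
              // average utilization is about the midpoint of the band.
              return (highWatermark + lowWatermark) / 2.0;
            }

            public static void main(String[] args) {
              double avg = averageUtilization(0.99, 0.95); // assumed example factors
              System.out.printf("average utilization ~ %.0f%%%n", avg * 100);
            }
          }
          ```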

          stack stack added a comment -

          Thanks Ben. Let me try it here. Should be able in next day or so.

          eshcar Eshcar Hillel added a comment -

          Also, the request distribution is zipfian. The memstore is flushed to disk at 128MB, then it is compacted (removing duplicates) and compressed, creating a file of 60-80MB (Ben Manes you can verify this in your logs); a second flush creates a new file, at which point the total size of the files exceeds the cache size. The third flush triggers a compaction resulting in a single file of less than 100MB (again, due to removing duplicates and compression), and so on and so forth.
          With 1M operations you have about 6-7 flushes and about 3 compactions on disk. So for about 50% of the execution time the data can fit in memory (cache), and for 50% of the time it cannot fit into the cache.
          I would say this scenario demonstrates the benefit of tinylfu over lru: 90% hit rate vs 85% hit rate, ~30% improvement in mean read latency, and 20-25% improvement in tail latency (95-99th percentiles).
          However, I can't explain the improvement in the update latency. Ben Manes can you explain this? Have you ever measured update latency in your previous work?

          anoop.hbase Anoop Sam John added a comment -

          Can we test and compare both when the cache size is enough to hold the full data in it? No cache evictions.

          ben.manes Ben Manes added a comment - edited

          The update latencies, except for the average, were very similar. Since presumably not all entries fit in the cache, an update that misses would trigger an eviction. It could be the impact of the O(n lg n) Lru eviction thread, GC, or more coarse-grained locking. Since this was run on a MacBook rather than an isolated server, it could also be a background daemon kicking in. I think the important takeaway is not the absolute numbers but that they are in the same ballpark. There isn't an outlier indicating the new implementation has a major degradation, e.g. due to locking or hit rates.

          Eshcar Hillel: To more directly answer your question, the update cost is very close to ConcurrentHashMap. This is because the locking overhead dominates, leaving enough spare CPU cycles to mask any other penalties being processed asynchronously.

          Anoop Sam John In my original post the results mentioned were probably with no evictions. Because LruBlockCache penalizes only the eviction, whereas Caffeine spreads it out, one would expect Lru to have an advantage. But by Amdahl's law the potential speedup is very tiny, so it falls into the noise. A fresh test would be good.
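
          The Amdahl's law argument can be made concrete with a couple of lines of arithmetic. The 1% figure below is an assumed illustration, not a measurement: if eviction is only a small fraction p of total request time, then even an infinitely faster eviction path bounds the overall speedup at 1/(1-p).

          ```java
          // Amdahl's law: overall speedup when a fraction p of the work
          // is accelerated by a factor s.
          public class AmdahlSketch {
            public static double speedup(double p, double s) {
              return 1.0 / ((1.0 - p) + p / s);
            }

            public static void main(String[] args) {
              double p = 0.01; // assume eviction is 1% of request time (illustrative)
              System.out.println(speedup(p, 2.0));     // halving eviction cost: modest gain
              System.out.println(speedup(p, 1e12));    // near-infinite speedup: bounded by 1/(1-p) ~ 1.0101
            }
          }
          ```

          With such a small serial fraction the best case is about a 1% overall improvement, which indeed falls into benchmark noise.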

          ben.manes Ben Manes added a comment -

          stack, will you have time to test this soon?

          stack stack added a comment -

          My bad. Yes. I MUST do this.

          stack stack added a comment -

          I tried this patch. It looks good. Seems a faithful replacement of our old lrublockcache except for the part where it does not reproduce our partitioning of the cache (e.g. inmemory markings on a columnfamily are just ignored now). In a follow-on we should do cleanup in the doc to note that inmemory is lrublockcache-specific. Metrics look right.

          I tried it under a few loadings and it seems to do worse (YCSB zipfian). See attached graphs. I'm probably doing something wrong. Help me out Ben Manes.

          stack stack added a comment -

          Here is the key to reading the diagrams from YCSB workload c (zipfian random reads).

          There are three diagrams. Each covers the same time range. Each has 6 humps: three without the patch, then three with the tinylfu patch. One is Gets, one is block cache misses, and the third is block cache hits (I had to separate the latter two because the hits were so much in excess of the misses).

          The first three humps are from loadings done against the tip of branch-1: two runs where there are a lot of cache misses (the data did not fit the cache – total heap was 8G) and one run where hits are mostly out of cache (heap was 31G).

          The last three humps are from loadings done against the tip of branch-1 with the patch backported: one run with lots of cache misses (8G heap), a run with even more cache misses (4G), and then a case where almost everything fits in the heap (31G).

          Sorry the two runs are not exactly symmetric. Can fix that next time through. Config error on my part.

          What we can see is that tinylfu seems to do better when nearly all fits in cache. We can do more throughput. It even starts to rise toward the end of the test run, but overall is running at a higher rate. My guess is that tinylfu is just using more of the cache than lrublockcache, and perhaps its smarts are showing when the rate starts to rise toward the tail of the test run.

          For the cases where we are missing cache, it does worse. This I cannot explain.

          There is little i/o when we miss cache (we seem to be getting blocks from fscache). All blocks are local. This is a single RegionServer standing on top of an HDFS cluster of 8 nodes.

          Pointers appreciated.

          ben.manes Ben Manes added a comment -

          Thanks stack! I won't be able to dig into this until the weekend. If I understand you right, the concern is that the throughput is lower for smaller caches? That would imply a lower hit rate, so even the low penalty would be observable when accumulated.

          Maybe there's a bug in how the new cache uses the "replace" flag or doesn't cache on a weight limit? Since the cache is weighted, it might also be that a block exceeds the size of the window cache so there are more compulsory misses. I'd really like to step through a test case, but not sure how I'd isolate and repeat your observations atm.

          stack stack added a comment -

          No rush Ben Manes. I have a test bench up now so it's easy enough for me to try stuff. The concern is that tinylfu does worse than lrublockcache when there are lots of misses (fewer hits). I'd expect it to do better.

          Let me attach my patch here (I had to backport to branch-1; my tooling works w/ branch-1 only at the moment). You can add any debug you like and I'll rerun. I can send you whatever. Thanks for the help Ben Manes. I think it's encouraging that it does better in one case at least.

          stack stack added a comment -

          My backport FYI. You can add LOG to this or just tell me what you'd like to see. Thanks Ben Manes

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          0 patch 0m 5s The patch file was not named according to hbase's naming conventions. Please see https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for instructions.
          -1 patch 0m 7s HBASE-15560 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for help.



          Subsystem Report/Notes
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12837285/branch-1.tinylfu.txt
          JIRA Issue HBASE-15560
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/4339/console
          Powered by Apache Yetus 0.3.0 http://yetus.apache.org

          This message was automatically generated.

          ben.manes Ben Manes added a comment -

          I think my first step is to get an access trace (log the key's hash) so that I can run it through my simulator. Then I can verify that the policy behaves well when weights are ignored.

          The next step would be to re-run it but assigning weights. Do these vary in your test? I think YCSB uses static record sizes, so they were the same per entry. If not, then the argument would be a potential bug when trying to be weight-aware while promoting between the window/probation/protected regions. Right now the simulator is not weight-aware, so I'd hack a quick test based on the trace data.

          If all of those are good, then we need to go line-by-line to see where the two caches differ. It could be a miscalculation of an entry's weight or other subtle bugs. We'll know more if we can isolate it down to a small, repeatable test case to debug with.

          stack stack added a comment -

          How do I do the 'access trace', Ben Manes? Let me know how I do this so I can pass you what you need.

          How do I do 'weights', sir? I'm just doing YCSB workload c w/ the zipfian flag.

          ben.manes Ben Manes added a comment -

          Sorry for not getting to this over the weekend. A bit of a family scare which had a happy ending.

          An access trace is a log of the key hashes on a get. Then I can replay them offline with the simulator. The "weight" of an entry in workloadc claims to be 1KB uniformly. I wasn't sure if they were going to vary, e.g. with large and small sizes across the distribution.

          I do have ycsb integrated into the simulator for synthetic distributions so perhaps I can try to reproduce your observations that way.

          stack stack added a comment -

          Sorry for not getting to this over the weekend. A bit of a family scare which had a happy ending.

          Good.

          An access trace is a log of the key hashes on a get. Then I can replay them offline with the simulator. The "weight" of an entry in workloadc claims to be 1KB uniformly. I wasn't sure if they were going to vary, e.g. with large and small sizes across the distribution.

          Would you want the same dataset loaded too?

          ben.manes Ben Manes added a comment -

          Would you want the same dataset loaded too?

          That can't hurt, so unless it's more work we might as well.

          In my simulator, I tried to emulate workload c using the following configuration,

          • maximum-size = (below)
          • source = "synthetic"
          • distribution = "zipfian"
          • zipfian.items = 1000

          I then ran it with small caches to emulate your observation. LruBlockCache is an SLru variant, so I'm assuming it behaves similarly to the theoretical version.

          Policy max=5 max=10 max=25
          Lru 13.10% 20.70% 35.60%
          SLru 25.90% 29.30% 45.00%
          Caffeine 24.40% 32.30% 46.00%
          Optimal 35.20% 42.10% 45.50%

          We see that at the smallest size, 5, Caffeine slightly underperforms. However, whether it's slightly lower, equal, or higher varies by run. This is due to the distribution generation and Caffeine's hashing having randomness, so across runs we see it pretty much on par. As the size increases we see them all stay pretty close. Since SLru is known to be optimal for Zipf, this at least is a good sign, but it does not explain your observations.
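
          For anyone wanting to repeat this kind of experiment without the simulator, here is a toy re-creation: sample keys from a Zipf-like distribution over 1000 items and measure a plain LRU cache's hit rate at a few small capacities. The class and method names are illustrative, not the simulator's actual API, and the exact percentages will differ from the table above since the generators are not identical.

          ```java
          import java.util.LinkedHashMap;
          import java.util.Map;
          import java.util.Random;

          // Toy experiment: Zipfian trace over 1000 items, plain LRU cache.
          public class ZipfLruSketch {
            // Inverse-CDF sampling for a Zipfian distribution with exponent 1.
            static int[] zipfSample(int items, int count, long seed) {
              double[] cdf = new double[items];
              double sum = 0;
              for (int i = 0; i < items; i++) { sum += 1.0 / (i + 1); cdf[i] = sum; }
              Random rnd = new Random(seed);
              int[] trace = new int[count];
              for (int n = 0; n < count; n++) {
                double u = rnd.nextDouble() * sum;
                int i = 0;
                while (cdf[i] < u) i++;
                trace[n] = i;
              }
              return trace;
            }

            // An access-ordered LinkedHashMap bounded by removeEldestEntry is
            // a compact way to get LRU semantics in the JDK.
            static double lruHitRate(int[] trace, int capacity) {
              Map<Integer, Boolean> lru = new LinkedHashMap<Integer, Boolean>(16, 0.75f, true) {
                @Override protected boolean removeEldestEntry(Map.Entry<Integer, Boolean> e) {
                  return size() > capacity;
                }
              };
              int hits = 0;
              for (int key : trace) {
                if (lru.get(key) != null) hits++;  // get() also refreshes recency
                else lru.put(key, Boolean.TRUE);
              }
              return (double) hits / trace.length;
            }

            public static void main(String[] args) {
              int[] trace = zipfSample(1000, 100_000, 42);
              for (int max : new int[] {5, 10, 25}) {
                System.out.printf("max=%d lru hit rate=%.2f%%%n", max, 100 * lruHitRate(trace, max));
              }
            }
          }
          ```

          Swapping the LRU map for another policy (or replaying a captured trace instead of a synthetic one) gives the same kind of comparison table.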

          ben.manes Ben Manes added a comment -

          In the last run, Optimal is 55.50%. Sorry for the typo.

          stack stack added a comment -

          Right. How are you going to simulate my case where there are lots of cache misses? Can I turn on some logging or something for you? It seems pretty useless sending you a bunch of access keys if you don't have the same dataset loaded and the same hw.

          ben.manes Ben Manes added a comment -

          Are the access keys not in the data set, so that a lookup is not found? I assumed a miss means: query the cache, load the block, and store it into the cache. If the same key is queried again, it should be a cache hit.

          If that's correct then the value has no meaning and the keys define the access distribution. Any surrogate, like a hash of the key, will be representative. So using the same Zipf distribution should give us similar results.

          But I might be mistaken about how the cache is used in HBase and be evaluating it incorrectly in isolation.
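          To be explicit about the read-through pattern being assumed here, a minimal sketch is below. This is a hypothetical illustration, not HBase's BlockCache code: it uses an access-ordered LinkedHashMap as a toy LRU, and the point is that the hit rate depends only on the sequence of keys, so any surrogate key preserving the access distribution is representative.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy read-through cache: a miss "loads" the value and stores it,
// so the hit rate is a function of the key sequence alone.
public class ReadThroughSketch {
    static int hits = 0, misses = 0;

    // LRU eviction via LinkedHashMap in access order (hypothetical stand-in
    // for a real eviction policy such as SLru or W-TinyLFU).
    static <K, V> Map<K, V> lruCache(int maxSize) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    static String get(Map<String, String> cache, String key) {
        String value = cache.get(key);
        if (value == null) {
            misses++;
            value = "block-for-" + key;  // simulate loading from disk
            cache.put(key, value);
        } else {
            hits++;
        }
        return value;
    }

    public static void main(String[] args) {
        Map<String, String> cache = lruCache(2);
        get(cache, "a");  // miss: loaded and stored
        get(cache, "a");  // hit: same key queried again
        get(cache, "b");  // miss
        get(cache, "c");  // miss; evicts "a" (least recently used)
        get(cache, "a");  // miss again after eviction
        System.out.println("hits=" + hits + " misses=" + misses);
    }
}
```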

          stack stack added a comment -

          Are the access keys not in the data set so that it is not found?

          Yes, that's true; you should be able to mimic my setup, or just reproduce it by running YCSB against a running HBase instance.

          How do you suggest we reconcile our differing results? What can I pass you, or what do you want me to look at? Thanks.

          carp84 Yu Li added a comment -

          Interesting one and nice work Ben Manes.

          One question on the comparison results: was there any tracking of the RegionServer-side CPU cost during the test? If so, would you mind sharing the data? Thanks. Ben Manes stack

          stack stack added a comment -

          No problem Yu Li. I'm on something else at the moment, but I can put up the associated CPU usage for the above graphs when I reconfigure...

          ben.manes Ben Manes added a comment -

          If you can give me the steps to reproduce your observation on HBase then I'll try to debug it locally. That way I don't keep you in limbo.

          stack stack added a comment -

          I start an HBase regionserver and make sure that the dataset doesn't all fit in the cache, so I get some misses. I use YCSB to load the data, then run YCSB workload C to read with the Zipfian flags. Attached are the scripts I used for the load and for the reads.
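          For anyone trying to reproduce this, the steps above look roughly like the following. This is a sketch, not the attached scripts: the YCSB HBase binding name, table and column-family names, and record/operation counts are assumptions and should be sized so the dataset overflows the block cache.

```shell
# Load the dataset into a running HBase instance (names/counts assumed).
bin/ycsb load hbase10 -P workloads/workloadc \
    -p table=usertable -p columnfamily=family \
    -p recordcount=100000000

# Workload C is 100% reads; request a Zipfian key distribution.
bin/ycsb run hbase10 -P workloads/workloadc \
    -p table=usertable -p columnfamily=family \
    -p recordcount=100000000 -p operationcount=100000000 \
    -p requestdistribution=zipfian
```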

          stack stack added a comment -

          These hacks of mine are based on some scripts I got from Sean Busbey.

          carp84 Yu Li added a comment -

          Thank you boss, will wait for your update then.

          ben.manes Ben Manes added a comment -

          Sorry that I haven't had time to investigate this and wrap up the ticket. It's been the usual hectic end of year, but I will try to get to it soon.

          stack stack added a comment -

          Thanks for the update Ben Manes. Understood.


            People

            • Assignee: ben.manes Ben Manes
            • Reporter: ben.manes Ben Manes
            • Votes: 0
            • Watchers: 24