Details

    • Type: Sub-task
    • Status: Patch Available
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: 2.0.0
    • Component/s: BucketCache
    • Labels: None

      Description

      We keep a bucket entry map in BucketCache. Below is the math for the heap size of the keys and values going into this map.
      BlockCacheKey
      ---------------
      String hfileName - Ref - 4
      long offset - 8
      BlockType blockType - Ref - 4
      boolean isPrimaryReplicaBlock - 1
      Total = 12 (Object) + 17 = 29

      BucketEntry
      ------------
      int offsetBase - 4
      int length - 4
      byte offset1 - 1
      byte deserialiserIndex - 1
      long accessCounter - 8
      BlockPriority priority - Ref - 4
      volatile boolean markedForEvict - 1
      AtomicInteger refCount - 16 + 4
      long cachedTime - 8
      Total = 12 (Object) + 51 = 63

      ConcurrentHashMap Map.Entry - 40
      blocksByHFile ConcurrentSkipListSet Entry - 40

      Total = 29 + 63 + 80 = 172

      For 10 million blocks we will end up with about 1.6 GB of heap.
      This JIRA aims to reduce that as much as possible.
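
      As a quick sanity check of that arithmetic, here is a throwaway Java snippet (the per-entry sizes are taken straight from the breakdown above, not measured):

        // Back-of-the-envelope heap cost of the BucketCache entry map,
        // using the per-entry estimates from the description above.
        public class BucketMapHeapMath {
          public static void main(String[] args) {
            int blockCacheKey = 29;  // 12 (object header) + 17 (fields)
            int bucketEntry = 63;    // 12 (object header) + 51 (fields)
            int chmEntry = 40;       // ConcurrentHashMap Map.Entry
            int cslsEntry = 40;      // blocksByHFile ConcurrentSkipListSet entry
            long perBlock = blockCacheKey + bucketEntry + chmEntry + cslsEntry; // 172
            long blocks = 10_000_000L;
            System.out.printf("%d bytes/block -> %.2f GB for %d blocks%n",
                perBlock, perBlock * blocks / (1024.0 * 1024 * 1024), blocks);
            // Prints roughly 1.60 GB, matching the estimate above.
          }
        }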

      1. HBASE-17819_V3.patch
        23 kB
        Anoop Sam John
      2. HBASE-17819_V2.patch
        21 kB
        Anoop Sam John
      3. HBASE-17819_V1.patch
        20 kB
        Anoop Sam John

        Issue Links

          Activity

          stack stack added a comment -

          Any progress here lads? Thanks. Anoop Sam John

          anoop.hbase Anoop Sam John added a comment -

          Will speed up these pending works, Stack. Let me take it up next week. Thanks for the reminder.

          anoop.hbase Anoop Sam John added a comment -

          These are the things I am trying out:
          1. We have 2 enum refs, in the key and in BucketEntry. Change those to byte types and just store the ordinal. Each enum has only a few items, so a byte is enough.
          Result: saves 6 bytes per entry.
          2. Split BucketEntry into 2 classes, one per kind of IOEngine: file mode and RAM-backed. Only the RAM-backed engine needs the ref-count mechanism; in file mode we can remove that state along with markedForEvict.
          Result: saves 21 bytes per entry for file mode.
          3. Change the refCount type from AtomicInteger to a volatile int. The AtomicInteger object plus its ref in BucketEntry takes 20 bytes, whereas an int works with 4 bytes. For the atomic increment/decrement, we will mimic what AtomicInteger does (an unsafe CAS); a sketch follows this list.
          Result: saves 16 bytes per entry for the RAM-backed IOEngine.
          4. Remove the CSLM tracking blocks per HFile. To remove blocks when an HFile is closed, we will then have to iterate over all bucket entries, check each entry's HFile, and remove matches. This is what we do in the LRU cache. Considering this operation is not on a hot path, is that OK? We do this when CompactedHFilesDischarger runs (at a 2-minute interval) and removes all compacted-away files.
          Result: saves 40 bytes per entry.
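
          A minimal sketch of item 3, using java.util.concurrent.atomic.AtomicIntegerFieldUpdater rather than raw Unsafe (class and field names here are illustrative, not the patch's actual code):

            import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

            // A volatile int replaces the AtomicInteger, so the counter lives inline in
            // the entry (4 bytes) instead of in a separate 16-byte object plus a 4-byte
            // ref. CAS semantics are preserved via a single shared field updater.
            class RefCountedEntry {
              private static final AtomicIntegerFieldUpdater<RefCountedEntry> REF_COUNT =
                  AtomicIntegerFieldUpdater.newUpdater(RefCountedEntry.class, "refCount");

              private volatile int refCount; // inline, instead of an AtomicInteger object

              int incrementRefCount() {
                return REF_COUNT.incrementAndGet(this);
              }

              int decrementRefCount() {
                return REF_COUNT.decrementAndGet(this);
              }
            }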

          anoop.hbase Anoop Sam John added a comment -

          We have a config that says whether the blocks belonging to an HFile should be evicted when the file is closed.
          Key: 'hbase.rs.evictblocksonclose'
          This defaults to false.
          That means the blocks won't be forcefully evicted when a file is closed; eventually LRU eviction will remove them.
          Is this what was happening before we had CompactedHFilesDischarger, ramkrishna.s.vasudevan?
          Now CompactedHFilesDischarger seems not to consider this config!
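
          For illustration, a minimal sketch of how such a flag would be read (the key string is the one quoted above; the helper class is hypothetical):

            import org.apache.hadoop.conf.Configuration;

            // Blocks of a closed HFile are force-evicted only when this flag is set;
            // with the default of false they simply age out via normal LRU eviction.
            class EvictOnCloseCheck {
              static boolean shouldEvictOnClose(Configuration conf) {
                return conf.getBoolean("hbase.rs.evictblocksonclose", false);
              }
            }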

          ram_krish ramkrishna.s.vasudevan added a comment -

          Now CompactedHFilesDischarger seems not to consider this config!

          Even previously, when a compacted file was being closed, its blocks were forcefully evicted.
          The configuration 'hbase.rs.evictblocksonclose' was mainly for when an HStore is closed, not for when a compacted file is closed. If we need this behaviour we can see how it can be added to CompactedHFilesDischarger, but it was not something that was missed.

          anoop.hbase Anoop Sam John added a comment -

          I see. Thanks for confirming. I thought we had somehow missed it while doing the CompactedHFilesDischarger work.

          anoop.hbase Anoop Sam John added a comment -

          The CompactedHFilesDischarger runs at an interval, and one run might remove many compacted files. So instead of removing blocks file by file, we can do it for a list of files at once; see the sketch below. Need to see how effectively we can do that.
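
          A rough sketch of that batched idea, scanning the backing map once for a whole set of compacted files (BlockKey is a simplified stand-in for the real BlockCacheKey, and real code would route removals through the eviction path):

            import java.util.Map;
            import java.util.Set;
            import java.util.concurrent.ConcurrentHashMap;

            class BatchedEviction {
              // One pass over all entries, testing each key's file name against the
              // whole batch, instead of one full scan per compacted file.
              static int evictBlocksByHfileNames(ConcurrentHashMap<BlockKey, Object> backingMap,
                  Set<String> hfileNames) {
                int evicted = 0;
                for (Map.Entry<BlockKey, Object> e : backingMap.entrySet()) {
                  if (hfileNames.contains(e.getKey().hfileName)) {
                    backingMap.remove(e.getKey());
                    evicted++;
                  }
                }
                return evicted;
              }

              static class BlockKey {
                final String hfileName;
                BlockKey(String hfileName) { this.hfileName = hfileName; }
              }
            }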

          anoop.hbase Anoop Sam John added a comment -

          V1 patch. evictByHFile() does a loop over every entry for each file removal, and it is called from CompactedHFilesDischarger per file. Maybe there is some optimization we can think of?

          anoop.hbase Anoop Sam John added a comment -

          See what QA says

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 20s Docker mode activated.
          +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          0 mvndep 0m 16s Maven dependency ordering for branch
          +1 mvninstall 3m 17s master passed
          +1 compile 0m 53s master passed
          +1 checkstyle 0m 39s master passed
          +1 mvneclipse 0m 24s master passed
          -1 findbugs 2m 39s hbase-server in master has 9 extant Findbugs warnings.
          +1 javadoc 0m 42s master passed
          0 mvndep 0m 16s Maven dependency ordering for patch
          +1 mvninstall 1m 0s the patch passed
          +1 compile 0m 52s the patch passed
          -1 javac 0m 38s hbase-server generated 3 new + 6 unchanged - 0 fixed = 9 total (was 6)
          +1 checkstyle 0m 39s the patch passed
          +1 mvneclipse 0m 24s the patch passed
          -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 hadoopcheck 32m 7s Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4.
          +1 findbugs 4m 6s the patch passed
          +1 javadoc 0m 53s the patch passed
          +1 unit 2m 25s hbase-common in the patch passed.
          -1 unit 40m 6s hbase-server in the patch failed.
          +1 asflicense 0m 17s The patch does not generate ASF License warnings.
          93m 19s



          Reason Tests
          Failed junit tests hadoop.hbase.io.hfile.TestLazyDataBlockDecompression
            hadoop.hbase.regionserver.TestBlocksScanned
            hadoop.hbase.coprocessor.TestCoprocessorInterface
            hadoop.hbase.io.hfile.TestPrefetch
            hadoop.hbase.io.hfile.TestHFileEncryption
            hadoop.hbase.regionserver.TestScanner
            hadoop.hbase.io.hfile.TestHFile
            hadoop.hbase.io.TestHalfStoreFileReader
            hadoop.hbase.regionserver.TestHStoreFile
          Timed out junit tests org.apache.hadoop.hbase.io.hfile.bucket.TestBucketCache



          Subsystem Report/Notes
          Docker Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37
          JIRA Issue HBASE-17819
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12877957/HBASE-17819_V1.patch
          Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile
          uname Linux 43cb8762c75c 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / 6b7ebc0
          Default Java 1.8.0_131
          findbugs v3.1.0-RC3
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/7714/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
          javac https://builds.apache.org/job/PreCommit-HBASE-Build/7714/artifact/patchprocess/diff-compile-javac-hbase-server.txt
          whitespace https://builds.apache.org/job/PreCommit-HBASE-Build/7714/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/7714/artifact/patchprocess/patch-unit-hbase-server.txt
          Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/7714/testReport/
          modules C: hbase-common hbase-server U: .
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/7714/console
          Powered by Apache Yetus 0.4.0 http://yetus.apache.org

          This message was automatically generated.

          ram_krish ramkrishna.s.vasudevan added a comment -

          evictByHFile() does a loop over every entry for each file removal, and it is called from CompactedHFilesDischarger per file. Maybe there is some optimization we can think of?

          If we are sure we do not need the map, one option is to use the config to say whether we are OK with deleting the blocks from the cache immediately, or whether we live with them and let the normal eviction algorithm delete the blocks as part of eviction, rather than iterating every time.

          anoop.hbase Anoop Sam John added a comment -

          Yep. Even if we don't evict them immediately, eventually those blocks will get evicted. The config's default value is false. As of now, the config is checked while closing a Store, i.e. when the region moves to another RS, on split or merge, or when the table is deleted (in all these cases the Store gets closed). For the compaction case I am likewise +1 on checking this config. Anyway, even if the block removal happens, it won't delay the compaction op, as it is done by the CompactedHFilesDischarger chore at a later point in time.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 17s Docker mode activated.
          +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          0 mvndep 0m 31s Maven dependency ordering for branch
          +1 mvninstall 3m 41s master passed
          +1 compile 0m 57s master passed
          +1 checkstyle 0m 40s master passed
          +1 mvneclipse 0m 24s master passed
          +1 findbugs 3m 12s master passed
          +1 javadoc 0m 42s master passed
          0 mvndep 0m 16s Maven dependency ordering for patch
          +1 mvninstall 1m 1s the patch passed
          +1 compile 0m 52s the patch passed
          -1 javac 0m 37s hbase-server generated 3 new + 6 unchanged - 0 fixed = 9 total (was 6)
          +1 checkstyle 0m 40s the patch passed
          +1 mvneclipse 0m 24s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 hadoopcheck 31m 52s Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4.
          +1 findbugs 3m 44s the patch passed
          +1 javadoc 0m 44s the patch passed
          +1 unit 2m 16s hbase-common in the patch passed.
          -1 unit 37m 19s hbase-server in the patch failed.
          +1 asflicense 0m 18s The patch does not generate ASF License warnings.
          90m 19s



          Reason Tests
          Failed junit tests hadoop.hbase.io.hfile.TestBlockCacheReporting
          Timed out junit tests org.apache.hadoop.hbase.io.hfile.bucket.TestBucketCache



          Subsystem Report/Notes
          Docker Client=1.11.2 Server=1.11.2 Image:yetus/hbase:757bf37
          JIRA Issue HBASE-17819
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12878099/HBASE-17819_V2.patch
          Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile
          uname Linux ad6195b089bf 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / 01db60d
          Default Java 1.8.0_131
          findbugs v3.1.0-RC3
          javac https://builds.apache.org/job/PreCommit-HBASE-Build/7727/artifact/patchprocess/diff-compile-javac-hbase-server.txt
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/7727/artifact/patchprocess/patch-unit-hbase-server.txt
          Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/7727/testReport/
          modules C: hbase-common hbase-server U: .
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/7727/console
          Powered by Apache Yetus 0.4.0 http://yetus.apache.org

          This message was automatically generated.

          vrodionov Vladimir Rodionov added a comment - - edited

          BlockCacheKey
          ---------------
          String hfileName - Ref - 4
          long offset - 8
          BlockType blockType - Ref - 4
          boolean isPrimaryReplicaBlock - 1
          Total = 12 (Object) + 17 = 29
          BucketEntry
          ------------
          int offsetBase - 4
          int length - 4
          byte offset1 - 1
          byte deserialiserIndex - 1
          long accessCounter - 8
          BlockPriority priority - Ref - 4
          volatile boolean markedForEvict - 1
          AtomicInteger refCount - 16 + 4
          long cachedTime - 8
          Total = 12 (Object) + 51 = 63
          ConcurrentHashMap Map.Entry - 40
          blocksByHFile ConcurrentSkipListSet Entry - 40
          Total = 29 + 63 + 80 = 172

          Just a couple of corrections to your math, guys:

          1. Compressed OOPs (obj ref = 4 bytes) work only up to about a 30.5 GB heap. Many users already have more than that.
          2. The object field layout is slightly different: n-byte types are aligned on n-byte boundaries. So if you have, for example, boolean and long fields in an object, the object's size is going to be 16 (overhead) + 8 + 8 = 32, not 16 + 1 + 8. You should also take into account that the total object size is always a multiple of 8, so if you get 42 it is actually 48, because the next object starts on an 8-byte boundary.

          You can shave some bytes by just rearranging the fields of the object in size-descending order: first the 8-byte types (obj ref, long, double), followed by the 4-byte types (int, float), 2-byte types (short, char), and 1-byte types (boolean, byte) at the end.
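
          One way to verify such layout claims on a given JVM is OpenJDK's JOL tool (org.openjdk.jol:jol-core); a small sketch with a toy class, not the real BucketEntry:

            import org.openjdk.jol.info.ClassLayout;

            public class LayoutDemo {
              static class DemoEntry {
                boolean flag;  // 1-byte field, padded out by alignment
                long counter;  // 8-byte field, aligned on an 8-byte boundary
              }

              public static void main(String[] args) {
                // Prints header size, per-field offsets, alignment gaps, and the
                // total instance size rounded up to a multiple of 8 bytes.
                System.out.println(ClassLayout.parseClass(DemoEntry.class).toPrintable());
              }
            }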

          anoop.hbase Anoop Sam John added a comment -

          Thanks. Ya, will do that also. Here I am trying to avoid some fields and refs, so as to save some heap space per bucket entry.

          anoop.hbase Anoop Sam John added a comment - - edited

          BlockCacheKey, after alignment, has 32 bytes of heap overhead, and even after changing the ref to a byte it will stay at 32. Still it is worth doing, because above a 32 GB heap compressed refs won't apply and refs may take 8 bytes; then it will make a difference.
          BucketEntry was 64 bytes of heap, and after the patch it will be 48.
          We also remove 40 bytes per entry by removing the blocksByHFile set.
          So the math is:
          Now - 32 + 64 + 40 + 40 = 176
          After patch - 32 + 48 + 40 = 120

          Tested with Java Instrumentation.
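
          For reference, an Instrumentation-based sizeof harness typically looks like the sketch below (a premain agent loaded via -javaagent with a Premain-Class manifest entry; the exact harness used for this measurement is not attached to the issue):

            import java.lang.instrument.Instrumentation;

            public class SizeOfAgent {
              private static volatile Instrumentation inst;

              public static void premain(String agentArgs, Instrumentation instrumentation) {
                inst = instrumentation;
              }

              // Shallow size of one object, including header and padding; key, entry
              // and map-node sizes are measured separately and summed.
              public static long sizeOf(Object o) {
                return inst.getObjectSize(o);
              }
            }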

          yuzhihong@gmail.com Ted Yu added a comment -
          +  // We would like to reduce the head overhead per object of this type as much as possible.
          

          Remove "head"

          +public class SharedMemoryBucketEntry extends BucketEntry {
          

          Add a short javadoc for the above class.

          Please check the two failed tests.

          ram_krish ramkrishna.s.vasudevan added a comment -

          private volatile boolean markedForEvict;

          Minor nit: this appears in both of the new impls? Should we add another abstract class and keep markedForEvict there?

          Still it is worth doing, because above a 32 GB heap compressed refs won't apply and refs may take 8 bytes

          Compressed refs are not applied on bigger heap sizes? Or is it just related to 64-bit vs 32-bit architecture?
          Rest looks good to me.

          anoop.hbase Anoop Sam John added a comment - - edited

          To explain the approach: this is a bit different from the V2 patch. The major changes are:
          1. BucketEntry is extended to make the SharedMemory BucketEntry. For file mode there is no need to keep the ref count, as that is not a shared-memory type, so I removed the new states added for HBASE-11425 from BucketEntry. For off-heap mode there is now an extension of BucketEntry that carries the new states; a condensed sketch follows below.
          2. Removed the CSLM keeping the per-HFile-name block info. evictBlocksByHfileName will have a perf impact, as it has to iterate through all the entries to find out whether each block entry belongs to the file. For that, I changed evictBlocksByHfileName to be an async op: a dedicated eviction thread does the work. Anyway, even if we don't remove these blocks, or their removal is delayed, they will eventually get removed, as we have the LRU algo for eviction: when no space is left for adding new blocks, eviction happens and removes unused blocks. Moreover, eviction of blocks on HFile close is default off only (We have a config to turn this off). For the compaction case, we now have evictByHFiles happening for the compacted files; there will be a bit more delay before the actual removal of the blocks.
          But we save a lot of heap memory per entry with this approach. The math is in the comment above:

          Now - 32 + 64 + 40 + 40 = 176
          After patch - 32 + 48 + 40 = 120
          Tested with Java Instrumentation
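
          A condensed sketch of the class split in item 1 (field lists abbreviated; the real classes carry more state):

            // Base entry: only what every IOEngine needs. Enum ordinals are stored
            // as bytes, per the earlier comment on this issue.
            class BucketEntry {
              int offsetBase;
              int length;
              byte offset1;
              byte deserialiserIndex;
              long accessCounter;
              byte priority; // BlockPriority ordinal
            }

            // Used only when the backing memory is shared with in-flight readers
            // (off-heap engines); file mode sticks with the lean base class.
            class SharedMemoryBucketEntry extends BucketEntry {
              private volatile boolean markedForEvict;
              private volatile int refCount; // CAS-updated, see the earlier sketch
            }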

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 2m 16s Docker mode activated.
                Prechecks
          0 findbugs 0m 0s Findbugs executables are not available.
          +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
                master Compile Tests
          0 mvndep 0m 26s Maven dependency ordering for branch
          +1 mvninstall 4m 37s master passed
          +1 compile 0m 57s master passed
          +1 checkstyle 1m 16s master passed
          +1 shadedjars 5m 56s branch has no errors when building our shaded downstream artifacts.
          +1 javadoc 0m 38s master passed
                Patch Compile Tests
          0 mvndep 0m 13s Maven dependency ordering for patch
          +1 mvninstall 4m 35s the patch passed
          +1 compile 0m 57s the patch passed
          -1 javac 0m 42s hbase-server generated 3 new + 6 unchanged - 0 fixed = 9 total (was 6)
          -1 checkstyle 1m 4s hbase-server: The patch generated 1 new + 145 unchanged - 2 fixed = 146 total (was 147)
          -1 whitespace 0m 0s The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 shadedjars 4m 44s patch has no errors when building our shaded downstream artifacts.
          +1 hadoopcheck 48m 37s Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4.
          +1 javadoc 0m 41s the patch passed
                Other Tests
          -1 unit 39m 19s hbase-server in the patch failed.
          +1 unit 0m 23s hbase-external-blockcache in the patch passed.
          +1 asflicense 0m 26s The patch does not generate ASF License warnings.
          111m 36s



          Subsystem Report/Notes
          Docker Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01
          JIRA Issue HBASE-17819
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12895615/HBASE-17819_V3.patch
          Optional Tests asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile
          uname Linux 7178527a3271 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / e79a007dd9
          Default Java 1.8.0_141
          javac https://builds.apache.org/job/PreCommit-HBASE-Build/9592/artifact/patchprocess/diff-compile-javac-hbase-server.txt
          checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/9592/artifact/patchprocess/diff-checkstyle-hbase-server.txt
          whitespace https://builds.apache.org/job/PreCommit-HBASE-Build/9592/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/9592/artifact/patchprocess/patch-unit-hbase-server.txt
          Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/9592/testReport/
          modules C: hbase-server hbase-external-blockcache U: .
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/9592/console
          Powered by Apache Yetus 0.5.0 http://yetus.apache.org

          This message was automatically generated.

          stack stack added a comment -

          Put it up on RB, Anoop Sam John.

          How did you do the size compare?

          #2 sounds good. I like keeping it simple. Will have to see #1.

          yuzhihong@gmail.com Ted Yu added a comment -

          eviction of blocks on HFile close is default off only (We have a config to turn this off).

          The 'off' state is mentioned twice above.
          Can you clarify?

          anoop.hbase Anoop Sam John added a comment -

          Typo. It should read:
          eviction of blocks on HFile close is default off only (We have a config to turn this ON).

          How did you do the size compare?

          Tested with Java Instrumentation.

          stack stack added a comment -

          Tested with Java Instrumentation

          Yeah, sorry, what? How did you do the sizeof? Thanks.

          ram_krish ramkrishna.s.vasudevan added a comment -

          Patch LGTM.

          ram_krish ramkrishna.s.vasudevan added a comment -

          Yeah, sorry, what? How did you do the sizeof? Thanks.

          I think JMC can help here, in case some other reports are needed?


            People

            • Assignee:
              anoop.hbase Anoop Sam John
              Reporter:
              anoop.hbase Anoop Sam John
            • Votes: 0
              Watchers: 8
